Test Report: KVM_Linux_crio 19452

667295c6870455ef3392c60a87bf7f5fdc211f00:2024-08-16:35803

Test failures (30/318)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 154.62
36 TestAddons/parallel/MetricsServer 306.23
45 TestAddons/StoppedEnableDisable 154.29
139 TestFunctional/parallel/ImageCommands/ImageRemove 2.9
164 TestMultiControlPlane/serial/StopSecondaryNode 141.83
166 TestMultiControlPlane/serial/RestartSecondaryNode 58.88
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 379.13
171 TestMultiControlPlane/serial/StopCluster 141.88
231 TestMultiNode/serial/RestartKeepsNodes 324.35
233 TestMultiNode/serial/StopMultiNode 141.44
240 TestPreload 275.28
248 TestKubernetesUpgrade 376.5
320 TestStartStop/group/old-k8s-version/serial/FirstStart 300.29
345 TestStartStop/group/no-preload/serial/Stop 139.07
350 TestStartStop/group/embed-certs/serial/Stop 138.97
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 139
352 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 110.37
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
362 TestStartStop/group/old-k8s-version/serial/SecondStart 747.61
363 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.23
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.31
365 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.41
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.54
367 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 427.91
368 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 442.69
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 310.73
370 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 96.78
TestAddons/parallel/Ingress (154.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-517040 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-517040 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-517040 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5c0b5079-ac0c-4418-9904-70626aa5e8a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5c0b5079-ac0c-4418-9904-70626aa5e8a0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003977804s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-517040 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.351101395s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-517040 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.72
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-517040 addons disable ingress-dns --alsologtostderr -v=1: (1.650553857s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-517040 addons disable ingress --alsologtostderr -v=1: (7.695435041s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-517040 -n addons-517040
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-517040 logs -n 25: (1.175861198s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-218888                                                                     | download-only-218888 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
	| delete  | -p download-only-195850                                                                     | download-only-195850 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-071536 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC |                     |
	|         | binary-mirror-071536                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39393                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-071536                                                                     | binary-mirror-071536 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
	| addons  | disable dashboard -p                                                                        | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC |                     |
	|         | addons-517040                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC |                     |
	|         | addons-517040                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-517040 --wait=true                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:07 UTC | 15 Aug 24 23:08 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | addons-517040                                                                               |                      |         |         |                     |                     |
	| ip      | addons-517040 ip                                                                            | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-517040 ssh cat                                                                       | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | /opt/local-path-provisioner/pvc-e577ed7e-383c-4543-b504-630414b64b8d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:09 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | -p addons-517040                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-517040 ssh curl -s                                                                   | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-517040 addons                                                                        | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-517040 addons                                                                        | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | addons-517040                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | -p addons-517040                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-517040 ip                                                                            | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:11 UTC | 15 Aug 24 23:11 UTC |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:11 UTC | 15 Aug 24 23:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:11 UTC | 15 Aug 24 23:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:05:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:05:40.726703   20724 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:05:40.726808   20724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:05:40.726837   20724 out.go:358] Setting ErrFile to fd 2...
	I0815 23:05:40.726843   20724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:05:40.727048   20724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:05:40.727631   20724 out.go:352] Setting JSON to false
	I0815 23:05:40.728417   20724 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2841,"bootTime":1723760300,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:05:40.728471   20724 start.go:139] virtualization: kvm guest
	I0815 23:05:40.730343   20724 out.go:177] * [addons-517040] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:05:40.731488   20724 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:05:40.731489   20724 notify.go:220] Checking for updates...
	I0815 23:05:40.733942   20724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:05:40.735214   20724 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:05:40.736269   20724 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:05:40.737460   20724 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:05:40.738705   20724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:05:40.740038   20724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:05:40.771541   20724 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 23:05:40.772872   20724 start.go:297] selected driver: kvm2
	I0815 23:05:40.772898   20724 start.go:901] validating driver "kvm2" against <nil>
	I0815 23:05:40.772909   20724 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:05:40.773596   20724 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:05:40.773673   20724 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:05:40.788752   20724 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:05:40.788797   20724 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 23:05:40.789019   20724 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:05:40.789078   20724 cni.go:84] Creating CNI manager for ""
	I0815 23:05:40.789091   20724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 23:05:40.789098   20724 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 23:05:40.789145   20724 start.go:340] cluster config:
	{Name:addons-517040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:05:40.789237   20724 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:05:40.791212   20724 out.go:177] * Starting "addons-517040" primary control-plane node in "addons-517040" cluster
	I0815 23:05:40.792445   20724 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:05:40.792483   20724 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:05:40.792493   20724 cache.go:56] Caching tarball of preloaded images
	I0815 23:05:40.792581   20724 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:05:40.792594   20724 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:05:40.792886   20724 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/config.json ...
	I0815 23:05:40.792910   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/config.json: {Name:mkc068a6cb6d319d2d53c22ac1e2ab4c83706ce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:05:40.793075   20724 start.go:360] acquireMachinesLock for addons-517040: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:05:40.793137   20724 start.go:364] duration metric: took 42.072µs to acquireMachinesLock for "addons-517040"
	I0815 23:05:40.793161   20724 start.go:93] Provisioning new machine with config: &{Name:addons-517040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:05:40.793221   20724 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 23:05:40.794853   20724 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0815 23:05:40.794983   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:05:40.795023   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:05:40.809129   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0815 23:05:40.809515   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:05:40.810087   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:05:40.810108   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:05:40.810436   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:05:40.810622   20724 main.go:141] libmachine: (addons-517040) Calling .GetMachineName
	I0815 23:05:40.810749   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:05:40.810898   20724 start.go:159] libmachine.API.Create for "addons-517040" (driver="kvm2")
	I0815 23:05:40.810923   20724 client.go:168] LocalClient.Create starting
	I0815 23:05:40.810965   20724 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0815 23:05:40.936183   20724 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0815 23:05:41.109642   20724 main.go:141] libmachine: Running pre-create checks...
	I0815 23:05:41.109670   20724 main.go:141] libmachine: (addons-517040) Calling .PreCreateCheck
	I0815 23:05:41.110190   20724 main.go:141] libmachine: (addons-517040) Calling .GetConfigRaw
	I0815 23:05:41.110600   20724 main.go:141] libmachine: Creating machine...
	I0815 23:05:41.110614   20724 main.go:141] libmachine: (addons-517040) Calling .Create
	I0815 23:05:41.110753   20724 main.go:141] libmachine: (addons-517040) Creating KVM machine...
	I0815 23:05:41.112029   20724 main.go:141] libmachine: (addons-517040) DBG | found existing default KVM network
	I0815 23:05:41.112710   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.112560   20746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0815 23:05:41.112770   20724 main.go:141] libmachine: (addons-517040) DBG | created network xml: 
	I0815 23:05:41.112795   20724 main.go:141] libmachine: (addons-517040) DBG | <network>
	I0815 23:05:41.112807   20724 main.go:141] libmachine: (addons-517040) DBG |   <name>mk-addons-517040</name>
	I0815 23:05:41.112819   20724 main.go:141] libmachine: (addons-517040) DBG |   <dns enable='no'/>
	I0815 23:05:41.112830   20724 main.go:141] libmachine: (addons-517040) DBG |   
	I0815 23:05:41.112844   20724 main.go:141] libmachine: (addons-517040) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 23:05:41.112858   20724 main.go:141] libmachine: (addons-517040) DBG |     <dhcp>
	I0815 23:05:41.112870   20724 main.go:141] libmachine: (addons-517040) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 23:05:41.112894   20724 main.go:141] libmachine: (addons-517040) DBG |     </dhcp>
	I0815 23:05:41.112916   20724 main.go:141] libmachine: (addons-517040) DBG |   </ip>
	I0815 23:05:41.112971   20724 main.go:141] libmachine: (addons-517040) DBG |   
	I0815 23:05:41.113006   20724 main.go:141] libmachine: (addons-517040) DBG | </network>
	I0815 23:05:41.113020   20724 main.go:141] libmachine: (addons-517040) DBG | 
	I0815 23:05:41.118098   20724 main.go:141] libmachine: (addons-517040) DBG | trying to create private KVM network mk-addons-517040 192.168.39.0/24...
	I0815 23:05:41.180819   20724 main.go:141] libmachine: (addons-517040) DBG | private KVM network mk-addons-517040 192.168.39.0/24 created
	I0815 23:05:41.180849   20724 main.go:141] libmachine: (addons-517040) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040 ...
	I0815 23:05:41.180877   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.180788   20746 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:05:41.180899   20724 main.go:141] libmachine: (addons-517040) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 23:05:41.180981   20724 main.go:141] libmachine: (addons-517040) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 23:05:41.428023   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.427909   20746 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa...
	I0815 23:05:41.521941   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.521785   20746 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/addons-517040.rawdisk...
	I0815 23:05:41.521971   20724 main.go:141] libmachine: (addons-517040) DBG | Writing magic tar header
	I0815 23:05:41.521986   20724 main.go:141] libmachine: (addons-517040) DBG | Writing SSH key tar header
	I0815 23:05:41.521997   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.521931   20746 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040 ...
	I0815 23:05:41.522077   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040
	I0815 23:05:41.522099   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0815 23:05:41.522111   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040 (perms=drwx------)
	I0815 23:05:41.522127   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0815 23:05:41.522139   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0815 23:05:41.522155   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:05:41.522166   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0815 23:05:41.522183   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 23:05:41.522196   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 23:05:41.522214   20724 main.go:141] libmachine: (addons-517040) Creating domain...
	I0815 23:05:41.522227   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0815 23:05:41.522253   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 23:05:41.522270   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins
	I0815 23:05:41.522280   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home
	I0815 23:05:41.522291   20724 main.go:141] libmachine: (addons-517040) DBG | Skipping /home - not owner
	I0815 23:05:41.523151   20724 main.go:141] libmachine: (addons-517040) define libvirt domain using xml: 
	I0815 23:05:41.523186   20724 main.go:141] libmachine: (addons-517040) <domain type='kvm'>
	I0815 23:05:41.523196   20724 main.go:141] libmachine: (addons-517040)   <name>addons-517040</name>
	I0815 23:05:41.523203   20724 main.go:141] libmachine: (addons-517040)   <memory unit='MiB'>4000</memory>
	I0815 23:05:41.523232   20724 main.go:141] libmachine: (addons-517040)   <vcpu>2</vcpu>
	I0815 23:05:41.523252   20724 main.go:141] libmachine: (addons-517040)   <features>
	I0815 23:05:41.523263   20724 main.go:141] libmachine: (addons-517040)     <acpi/>
	I0815 23:05:41.523272   20724 main.go:141] libmachine: (addons-517040)     <apic/>
	I0815 23:05:41.523279   20724 main.go:141] libmachine: (addons-517040)     <pae/>
	I0815 23:05:41.523286   20724 main.go:141] libmachine: (addons-517040)     
	I0815 23:05:41.523291   20724 main.go:141] libmachine: (addons-517040)   </features>
	I0815 23:05:41.523296   20724 main.go:141] libmachine: (addons-517040)   <cpu mode='host-passthrough'>
	I0815 23:05:41.523304   20724 main.go:141] libmachine: (addons-517040)   
	I0815 23:05:41.523311   20724 main.go:141] libmachine: (addons-517040)   </cpu>
	I0815 23:05:41.523323   20724 main.go:141] libmachine: (addons-517040)   <os>
	I0815 23:05:41.523335   20724 main.go:141] libmachine: (addons-517040)     <type>hvm</type>
	I0815 23:05:41.523348   20724 main.go:141] libmachine: (addons-517040)     <boot dev='cdrom'/>
	I0815 23:05:41.523358   20724 main.go:141] libmachine: (addons-517040)     <boot dev='hd'/>
	I0815 23:05:41.523378   20724 main.go:141] libmachine: (addons-517040)     <bootmenu enable='no'/>
	I0815 23:05:41.523385   20724 main.go:141] libmachine: (addons-517040)   </os>
	I0815 23:05:41.523391   20724 main.go:141] libmachine: (addons-517040)   <devices>
	I0815 23:05:41.523399   20724 main.go:141] libmachine: (addons-517040)     <disk type='file' device='cdrom'>
	I0815 23:05:41.523421   20724 main.go:141] libmachine: (addons-517040)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/boot2docker.iso'/>
	I0815 23:05:41.523435   20724 main.go:141] libmachine: (addons-517040)       <target dev='hdc' bus='scsi'/>
	I0815 23:05:41.523448   20724 main.go:141] libmachine: (addons-517040)       <readonly/>
	I0815 23:05:41.523457   20724 main.go:141] libmachine: (addons-517040)     </disk>
	I0815 23:05:41.523471   20724 main.go:141] libmachine: (addons-517040)     <disk type='file' device='disk'>
	I0815 23:05:41.523484   20724 main.go:141] libmachine: (addons-517040)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 23:05:41.523498   20724 main.go:141] libmachine: (addons-517040)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/addons-517040.rawdisk'/>
	I0815 23:05:41.523513   20724 main.go:141] libmachine: (addons-517040)       <target dev='hda' bus='virtio'/>
	I0815 23:05:41.523525   20724 main.go:141] libmachine: (addons-517040)     </disk>
	I0815 23:05:41.523536   20724 main.go:141] libmachine: (addons-517040)     <interface type='network'>
	I0815 23:05:41.523548   20724 main.go:141] libmachine: (addons-517040)       <source network='mk-addons-517040'/>
	I0815 23:05:41.523559   20724 main.go:141] libmachine: (addons-517040)       <model type='virtio'/>
	I0815 23:05:41.523568   20724 main.go:141] libmachine: (addons-517040)     </interface>
	I0815 23:05:41.523580   20724 main.go:141] libmachine: (addons-517040)     <interface type='network'>
	I0815 23:05:41.523593   20724 main.go:141] libmachine: (addons-517040)       <source network='default'/>
	I0815 23:05:41.523603   20724 main.go:141] libmachine: (addons-517040)       <model type='virtio'/>
	I0815 23:05:41.523612   20724 main.go:141] libmachine: (addons-517040)     </interface>
	I0815 23:05:41.523622   20724 main.go:141] libmachine: (addons-517040)     <serial type='pty'>
	I0815 23:05:41.523635   20724 main.go:141] libmachine: (addons-517040)       <target port='0'/>
	I0815 23:05:41.523645   20724 main.go:141] libmachine: (addons-517040)     </serial>
	I0815 23:05:41.523657   20724 main.go:141] libmachine: (addons-517040)     <console type='pty'>
	I0815 23:05:41.523676   20724 main.go:141] libmachine: (addons-517040)       <target type='serial' port='0'/>
	I0815 23:05:41.523688   20724 main.go:141] libmachine: (addons-517040)     </console>
	I0815 23:05:41.523698   20724 main.go:141] libmachine: (addons-517040)     <rng model='virtio'>
	I0815 23:05:41.523707   20724 main.go:141] libmachine: (addons-517040)       <backend model='random'>/dev/random</backend>
	I0815 23:05:41.523717   20724 main.go:141] libmachine: (addons-517040)     </rng>
	I0815 23:05:41.523728   20724 main.go:141] libmachine: (addons-517040)     
	I0815 23:05:41.523737   20724 main.go:141] libmachine: (addons-517040)     
	I0815 23:05:41.523758   20724 main.go:141] libmachine: (addons-517040)   </devices>
	I0815 23:05:41.523772   20724 main.go:141] libmachine: (addons-517040) </domain>
	I0815 23:05:41.523795   20724 main.go:141] libmachine: (addons-517040) 
	I0815 23:05:41.530244   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:87:c9:7f in network default
	I0815 23:05:41.530913   20724 main.go:141] libmachine: (addons-517040) Ensuring networks are active...
	I0815 23:05:41.530939   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:41.531486   20724 main.go:141] libmachine: (addons-517040) Ensuring network default is active
	I0815 23:05:41.531773   20724 main.go:141] libmachine: (addons-517040) Ensuring network mk-addons-517040 is active
	I0815 23:05:41.532441   20724 main.go:141] libmachine: (addons-517040) Getting domain xml...
	I0815 23:05:41.533123   20724 main.go:141] libmachine: (addons-517040) Creating domain...
	I0815 23:05:42.934407   20724 main.go:141] libmachine: (addons-517040) Waiting to get IP...
	I0815 23:05:42.935349   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:42.935668   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:42.935706   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:42.935673   20746 retry.go:31] will retry after 237.590583ms: waiting for machine to come up
	I0815 23:05:43.175044   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:43.175445   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:43.175470   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:43.175402   20746 retry.go:31] will retry after 264.338969ms: waiting for machine to come up
	I0815 23:05:43.441710   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:43.442105   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:43.442132   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:43.442062   20746 retry.go:31] will retry after 302.741357ms: waiting for machine to come up
	I0815 23:05:43.746671   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:43.747144   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:43.747166   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:43.747122   20746 retry.go:31] will retry after 440.364326ms: waiting for machine to come up
	I0815 23:05:44.188535   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:44.188961   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:44.188985   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:44.188907   20746 retry.go:31] will retry after 630.018255ms: waiting for machine to come up
	I0815 23:05:44.820607   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:44.821012   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:44.821040   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:44.820964   20746 retry.go:31] will retry after 605.591929ms: waiting for machine to come up
	I0815 23:05:45.427623   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:45.427941   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:45.427971   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:45.427893   20746 retry.go:31] will retry after 754.34659ms: waiting for machine to come up
	I0815 23:05:46.183452   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:46.183737   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:46.183768   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:46.183723   20746 retry.go:31] will retry after 981.167966ms: waiting for machine to come up
	I0815 23:05:47.166157   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:47.166527   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:47.166553   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:47.166480   20746 retry.go:31] will retry after 1.531776262s: waiting for machine to come up
	I0815 23:05:48.699382   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:48.699721   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:48.699759   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:48.699672   20746 retry.go:31] will retry after 1.472107504s: waiting for machine to come up
	I0815 23:05:50.174440   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:50.174768   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:50.174794   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:50.174723   20746 retry.go:31] will retry after 1.871938627s: waiting for machine to come up
	I0815 23:05:52.048950   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:52.049332   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:52.049360   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:52.049310   20746 retry.go:31] will retry after 3.372664612s: waiting for machine to come up
	I0815 23:05:55.425961   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:55.426376   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:55.426399   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:55.426326   20746 retry.go:31] will retry after 2.813207941s: waiting for machine to come up
	I0815 23:05:58.242815   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:58.243240   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:58.243264   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:58.243203   20746 retry.go:31] will retry after 5.142110925s: waiting for machine to come up
	I0815 23:06:03.388238   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.388671   20724 main.go:141] libmachine: (addons-517040) Found IP for machine: 192.168.39.72
	I0815 23:06:03.388694   20724 main.go:141] libmachine: (addons-517040) Reserving static IP address...
	I0815 23:06:03.388709   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has current primary IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.389029   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find host DHCP lease matching {name: "addons-517040", mac: "52:54:00:df:98:d5", ip: "192.168.39.72"} in network mk-addons-517040
	I0815 23:06:03.462668   20724 main.go:141] libmachine: (addons-517040) DBG | Getting to WaitForSSH function...
	I0815 23:06:03.462696   20724 main.go:141] libmachine: (addons-517040) Reserved static IP address: 192.168.39.72
	I0815 23:06:03.462709   20724 main.go:141] libmachine: (addons-517040) Waiting for SSH to be available...
	I0815 23:06:03.465297   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.465742   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.465769   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.465946   20724 main.go:141] libmachine: (addons-517040) DBG | Using SSH client type: external
	I0815 23:06:03.465982   20724 main.go:141] libmachine: (addons-517040) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa (-rw-------)
	I0815 23:06:03.466018   20724 main.go:141] libmachine: (addons-517040) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:06:03.466032   20724 main.go:141] libmachine: (addons-517040) DBG | About to run SSH command:
	I0815 23:06:03.466047   20724 main.go:141] libmachine: (addons-517040) DBG | exit 0
	I0815 23:06:03.598221   20724 main.go:141] libmachine: (addons-517040) DBG | SSH cmd err, output: <nil>: 
	I0815 23:06:03.598521   20724 main.go:141] libmachine: (addons-517040) KVM machine creation complete!
	I0815 23:06:03.598864   20724 main.go:141] libmachine: (addons-517040) Calling .GetConfigRaw
	I0815 23:06:03.599370   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:03.599555   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:03.599717   20724 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 23:06:03.599732   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:03.600877   20724 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 23:06:03.600890   20724 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 23:06:03.600895   20724 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 23:06:03.600901   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:03.604210   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.604599   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.604639   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.604764   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:03.604951   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.605086   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.605229   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:03.605369   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:03.605561   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:03.605578   20724 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 23:06:03.705240   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:06:03.705259   20724 main.go:141] libmachine: Detecting the provisioner...
	I0815 23:06:03.705266   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:03.708105   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.708447   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.708482   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.708667   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:03.708863   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.709023   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.709162   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:03.709288   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:03.709451   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:03.709461   20724 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 23:06:03.810794   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 23:06:03.810865   20724 main.go:141] libmachine: found compatible host: buildroot
	I0815 23:06:03.810871   20724 main.go:141] libmachine: Provisioning with buildroot...
	I0815 23:06:03.810878   20724 main.go:141] libmachine: (addons-517040) Calling .GetMachineName
	I0815 23:06:03.811116   20724 buildroot.go:166] provisioning hostname "addons-517040"
	I0815 23:06:03.811138   20724 main.go:141] libmachine: (addons-517040) Calling .GetMachineName
	I0815 23:06:03.811326   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:03.813732   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.814132   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.814163   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.814307   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:03.814508   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.814722   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.814889   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:03.815070   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:03.815301   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:03.815318   20724 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-517040 && echo "addons-517040" | sudo tee /etc/hostname
	I0815 23:06:03.929270   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-517040
	
	I0815 23:06:03.929298   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:03.932137   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.932531   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.932560   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.932716   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:03.932924   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.933068   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.933212   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:03.933385   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:03.933564   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:03.933589   20724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-517040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-517040/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-517040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:06:04.043659   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
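The hostname step above reduces to a short shell sequence. The sketch below is only an illustration of what the logged command does, not the exact script minikube ships; the profile name addons-517040 is taken from this run:

    #!/bin/sh
    # Illustrative only: set the guest hostname and keep /etc/hosts in sync (mirrors the logged command).
    NAME=addons-517040
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
    fi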
	I0815 23:06:04.043691   20724 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:06:04.043725   20724 buildroot.go:174] setting up certificates
	I0815 23:06:04.043756   20724 provision.go:84] configureAuth start
	I0815 23:06:04.043769   20724 main.go:141] libmachine: (addons-517040) Calling .GetMachineName
	I0815 23:06:04.044106   20724 main.go:141] libmachine: (addons-517040) Calling .GetIP
	I0815 23:06:04.046931   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.047318   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.047345   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.047489   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.049926   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.050308   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.050345   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.050442   20724 provision.go:143] copyHostCerts
	I0815 23:06:04.050515   20724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:06:04.050634   20724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:06:04.050727   20724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:06:04.050783   20724 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.addons-517040 san=[127.0.0.1 192.168.39.72 addons-517040 localhost minikube]
	I0815 23:06:04.369628   20724 provision.go:177] copyRemoteCerts
	I0815 23:06:04.369681   20724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:06:04.369708   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.372443   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.372919   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.372948   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.373103   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:04.373299   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.373426   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:04.373563   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:04.452210   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 23:06:04.477044   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 23:06:04.505359   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:06:04.531096   20724 provision.go:87] duration metric: took 487.322626ms to configureAuth
	I0815 23:06:04.531133   20724 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:06:04.531322   20724 config.go:182] Loaded profile config "addons-517040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:06:04.531392   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.534467   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.534693   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.534719   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.534897   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:04.535126   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.535306   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.535462   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:04.535626   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:04.535828   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:04.535850   20724 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:06:04.793582   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:06:04.793609   20724 main.go:141] libmachine: Checking connection to Docker...
	I0815 23:06:04.793617   20724 main.go:141] libmachine: (addons-517040) Calling .GetURL
	I0815 23:06:04.794933   20724 main.go:141] libmachine: (addons-517040) DBG | Using libvirt version 6000000
	I0815 23:06:04.797318   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.797703   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.797729   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.797936   20724 main.go:141] libmachine: Docker is up and running!
	I0815 23:06:04.797951   20724 main.go:141] libmachine: Reticulating splines...
	I0815 23:06:04.797959   20724 client.go:171] duration metric: took 23.987028884s to LocalClient.Create
	I0815 23:06:04.797995   20724 start.go:167] duration metric: took 23.987088847s to libmachine.API.Create "addons-517040"
	I0815 23:06:04.798016   20724 start.go:293] postStartSetup for "addons-517040" (driver="kvm2")
	I0815 23:06:04.798030   20724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:06:04.798055   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.798317   20724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:06:04.798340   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.800645   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.801049   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.801074   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.801191   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:04.801358   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.801477   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:04.801577   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:04.881171   20724 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:06:04.885602   20724 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:06:04.885640   20724 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:06:04.885718   20724 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:06:04.885748   20724 start.go:296] duration metric: took 87.724897ms for postStartSetup
	I0815 23:06:04.885784   20724 main.go:141] libmachine: (addons-517040) Calling .GetConfigRaw
	I0815 23:06:04.886343   20724 main.go:141] libmachine: (addons-517040) Calling .GetIP
	I0815 23:06:04.888928   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.889436   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.889468   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.889749   20724 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/config.json ...
	I0815 23:06:04.889976   20724 start.go:128] duration metric: took 24.096745212s to createHost
	I0815 23:06:04.890001   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.892676   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.893009   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.893044   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.893171   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:04.893468   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.893644   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.893789   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:04.893995   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:04.894198   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:04.894211   20724 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:06:04.994780   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723763164.969432037
	
	I0815 23:06:04.994806   20724 fix.go:216] guest clock: 1723763164.969432037
	I0815 23:06:04.994816   20724 fix.go:229] Guest: 2024-08-15 23:06:04.969432037 +0000 UTC Remote: 2024-08-15 23:06:04.88999088 +0000 UTC m=+24.196938035 (delta=79.441157ms)
	I0815 23:06:04.994847   20724 fix.go:200] guest clock delta is within tolerance: 79.441157ms
	I0815 23:06:04.994854   20724 start.go:83] releasing machines lock for "addons-517040", held for 24.201703154s
	I0815 23:06:04.994882   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.995181   20724 main.go:141] libmachine: (addons-517040) Calling .GetIP
	I0815 23:06:04.998178   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.998557   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.998586   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.998753   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.999223   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.999381   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.999447   20724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:06:04.999495   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.999555   20724 ssh_runner.go:195] Run: cat /version.json
	I0815 23:06:04.999579   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:05.002277   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:05.002336   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:05.002605   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:05.002630   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:05.002729   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:05.002759   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:05.002775   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:05.002973   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:05.002984   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:05.003129   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:05.003130   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:05.003296   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:05.003300   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:05.003438   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:05.079137   20724 ssh_runner.go:195] Run: systemctl --version
	I0815 23:06:05.101347   20724 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:06:05.267014   20724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:06:05.273781   20724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:06:05.273869   20724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:06:05.289994   20724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 23:06:05.290019   20724 start.go:495] detecting cgroup driver to use...
	I0815 23:06:05.290079   20724 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:06:05.306594   20724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:06:05.321261   20724 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:06:05.321329   20724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:06:05.335681   20724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:06:05.349863   20724 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:06:05.468337   20724 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:06:05.633912   20724 docker.go:233] disabling docker service ...
	I0815 23:06:05.633989   20724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:06:05.648827   20724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:06:05.662175   20724 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:06:05.785120   20724 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:06:05.906861   20724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:06:05.921576   20724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:06:05.940062   20724 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:06:05.940120   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:05.951117   20724 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:06:05.951177   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:05.961972   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:05.973232   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:05.984841   20724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:06:05.995999   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:06.007017   20724 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:06.024835   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:06.036277   20724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:06:06.046555   20724 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 23:06:06.046609   20724 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 23:06:06.064414   20724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:06:06.075463   20724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:06:06.203560   20724 ssh_runner.go:195] Run: sudo systemctl restart crio
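Taken together, the CRI-O adjustments logged above amount to: point the pause image at registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, load br_netfilter, enable IPv4 forwarding, and restart the runtime (the unprivileged-port sysctl edits are omitted here for brevity). A condensed sketch using the same paths and values as the commands in the log:

    #!/bin/sh
    # Condensed from the sed/modprobe/sysctl commands shown above; the restart picks the changes up.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio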
	I0815 23:06:06.342762   20724 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:06:06.342856   20724 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:06:06.348108   20724 start.go:563] Will wait 60s for crictl version
	I0815 23:06:06.348179   20724 ssh_runner.go:195] Run: which crictl
	I0815 23:06:06.352199   20724 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:06:06.394874   20724 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:06:06.395030   20724 ssh_runner.go:195] Run: crio --version
	I0815 23:06:06.423477   20724 ssh_runner.go:195] Run: crio --version
	I0815 23:06:06.454105   20724 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:06:06.455653   20724 main.go:141] libmachine: (addons-517040) Calling .GetIP
	I0815 23:06:06.458156   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:06.458574   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:06.458593   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:06.458849   20724 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:06:06.463103   20724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:06:06.475568   20724 kubeadm.go:883] updating cluster {Name:addons-517040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 23:06:06.475665   20724 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:06:06.475716   20724 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:06:06.507434   20724 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 23:06:06.507503   20724 ssh_runner.go:195] Run: which lz4
	I0815 23:06:06.511640   20724 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 23:06:06.516044   20724 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 23:06:06.516075   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 23:06:07.811863   20724 crio.go:462] duration metric: took 1.300248738s to copy over tarball
	I0815 23:06:07.811948   20724 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 23:06:10.065094   20724 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.253115362s)
	I0815 23:06:10.065122   20724 crio.go:469] duration metric: took 2.253234314s to extract the tarball
	I0815 23:06:10.065129   20724 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 23:06:10.102357   20724 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:06:10.143841   20724 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:06:10.143862   20724 cache_images.go:84] Images are preloaded, skipping loading
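This is the preload fast path: instead of pulling each image over the network, minikube copies a roughly 389 MB lz4 tarball of the container store into the guest (the 389136428-byte scp above) and unpacks it under /var, after which crictl reports every required image as already present. The extraction step on its own, with the same flags as the logged command:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json | head   # verify the images are now visible to CRI-O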
	I0815 23:06:10.143869   20724 kubeadm.go:934] updating node { 192.168.39.72 8443 v1.31.0 crio true true} ...
	I0815 23:06:10.143980   20724 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-517040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
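The [Service] override above is written as a systemd drop-in (the 312-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf scp'd a few lines below), so activating it is just the usual reload-and-start, as the later log lines confirm:

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    systemctl is-active kubelet   # quick check; kubelet may keep restarting until kubeadm init writes its config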
	I0815 23:06:10.144057   20724 ssh_runner.go:195] Run: crio config
	I0815 23:06:10.196704   20724 cni.go:84] Creating CNI manager for ""
	I0815 23:06:10.196728   20724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 23:06:10.196741   20724 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 23:06:10.196760   20724 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.72 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-517040 NodeName:addons-517040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 23:06:10.196930   20724 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-517040"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
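This rendered config is staged to /var/tmp/minikube/kubeadm.yaml (via the kubeadm.yaml.new scp a few lines below) and later handed to kubeadm init, as the StartCluster step further down shows. Stripped of the long --ignore-preflight-errors list, the invocation is essentially:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem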
	I0815 23:06:10.196998   20724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:06:10.207497   20724 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 23:06:10.207566   20724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 23:06:10.217765   20724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0815 23:06:10.234807   20724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:06:10.251575   20724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0815 23:06:10.268168   20724 ssh_runner.go:195] Run: grep 192.168.39.72	control-plane.minikube.internal$ /etc/hosts
	I0815 23:06:10.272116   20724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:06:10.284591   20724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:06:10.410176   20724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:06:10.428192   20724 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040 for IP: 192.168.39.72
	I0815 23:06:10.428221   20724 certs.go:194] generating shared ca certs ...
	I0815 23:06:10.428240   20724 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.428411   20724 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:06:10.719434   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt ...
	I0815 23:06:10.719464   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt: {Name:mk35b78ed0b44898f8fccf955c44667fbeeb3aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.719651   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key ...
	I0815 23:06:10.719665   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key: {Name:mkfe2022f0e76b2546b591d45db0a65a8271ee44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.719777   20724 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:06:10.820931   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt ...
	I0815 23:06:10.820954   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt: {Name:mk18b07ecaae2f5d5b1d2b1190f207b4fbce25e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.821127   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key ...
	I0815 23:06:10.821141   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key: {Name:mk11e730b8ff488a6904be1f740c8f279d9244f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.821232   20724 certs.go:256] generating profile certs ...
	I0815 23:06:10.821283   20724 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.key
	I0815 23:06:10.821300   20724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt with IP's: []
	I0815 23:06:11.058179   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt ...
	I0815 23:06:11.058208   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: {Name:mk0e4ed4fc2b71853657271fe26848031c301741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.058411   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.key ...
	I0815 23:06:11.058425   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.key: {Name:mk74afd4b015ae7865aff62b69ff6fef7f3be912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.058525   20724 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key.f0322d93
	I0815 23:06:11.058546   20724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt.f0322d93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.72]
	I0815 23:06:11.391778   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt.f0322d93 ...
	I0815 23:06:11.391805   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt.f0322d93: {Name:mk9418b889dd16c96ce3fb0bd06373511a63ef74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.391980   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key.f0322d93 ...
	I0815 23:06:11.391998   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key.f0322d93: {Name:mkb80e5a4d71da08a29f1a1e1ce7880ea6fdcd85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.392097   20724 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt.f0322d93 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt
	I0815 23:06:11.392170   20724 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key.f0322d93 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key
	I0815 23:06:11.392216   20724 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.key
	I0815 23:06:11.392232   20724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.crt with IP's: []
	I0815 23:06:11.595480   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.crt ...
	I0815 23:06:11.595512   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.crt: {Name:mk3ee1a0e073c1bcf6aa89f1933fd66ca093a883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.595675   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.key ...
	I0815 23:06:11.595686   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.key: {Name:mk8f27836cb30023d270b9e91aad5ed309ae2b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.595851   20724 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:06:11.595883   20724 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:06:11.595908   20724 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:06:11.595931   20724 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:06:11.596479   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:06:11.627542   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:06:11.652808   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:06:11.677082   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:06:11.702600   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 23:06:11.727819   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 23:06:11.753020   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:06:11.778942   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:06:11.804017   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:06:11.831711   20724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 23:06:11.859362   20724 ssh_runner.go:195] Run: openssl version
	I0815 23:06:11.865854   20724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:06:11.877023   20724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:06:11.881891   20724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:06:11.881966   20724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:06:11.891176   20724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
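The b5213941.0 symlink created above follows the OpenSSL c_rehash convention: the link name is the subject hash of the CA certificate, which is how TLS clients on the guest locate minikubeCA.pem under /etc/ssl/certs. The equivalent by hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"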
	I0815 23:06:11.903120   20724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:06:11.907440   20724 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 23:06:11.907493   20724 kubeadm.go:392] StartCluster: {Name:addons-517040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:06:11.907581   20724 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 23:06:11.907630   20724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 23:06:11.945176   20724 cri.go:89] found id: ""
	I0815 23:06:11.945238   20724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 23:06:11.956098   20724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 23:06:11.966836   20724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 23:06:11.977278   20724 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 23:06:11.977298   20724 kubeadm.go:157] found existing configuration files:
	
	I0815 23:06:11.977349   20724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 23:06:11.987186   20724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 23:06:11.987241   20724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 23:06:11.997681   20724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 23:06:12.007159   20724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 23:06:12.007220   20724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 23:06:12.017495   20724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 23:06:12.027205   20724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 23:06:12.027251   20724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 23:06:12.037410   20724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 23:06:12.047285   20724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 23:06:12.047354   20724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 23:06:12.057717   20724 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 23:06:12.110771   20724 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 23:06:12.110846   20724 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 23:06:12.215755   20724 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 23:06:12.215856   20724 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 23:06:12.215967   20724 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 23:06:12.227259   20724 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 23:06:12.321650   20724 out.go:235]   - Generating certificates and keys ...
	I0815 23:06:12.321788   20724 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 23:06:12.321854   20724 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 23:06:12.496664   20724 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 23:06:12.640291   20724 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 23:06:12.973409   20724 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 23:06:13.160236   20724 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 23:06:13.350509   20724 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 23:06:13.350677   20724 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-517040 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0815 23:06:13.524485   20724 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 23:06:13.524640   20724 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-517040 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0815 23:06:13.907858   20724 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 23:06:14.056732   20724 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 23:06:14.274345   20724 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 23:06:14.274485   20724 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 23:06:14.406585   20724 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 23:06:14.617253   20724 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 23:06:14.962491   20724 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 23:06:15.190333   20724 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 23:06:15.305454   20724 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 23:06:15.305984   20724 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 23:06:15.308554   20724 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 23:06:15.310909   20724 out.go:235]   - Booting up control plane ...
	I0815 23:06:15.311013   20724 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 23:06:15.311101   20724 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 23:06:15.311169   20724 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 23:06:15.326432   20724 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 23:06:15.333932   20724 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 23:06:15.334021   20724 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 23:06:15.482468   20724 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 23:06:15.482639   20724 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 23:06:15.983865   20724 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.010486ms
	I0815 23:06:15.983985   20724 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 23:06:21.482798   20724 kubeadm.go:310] [api-check] The API server is healthy after 5.502068291s
	I0815 23:06:21.495713   20724 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 23:06:21.512179   20724 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 23:06:21.553546   20724 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 23:06:21.553760   20724 kubeadm.go:310] [mark-control-plane] Marking the node addons-517040 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 23:06:21.566832   20724 kubeadm.go:310] [bootstrap-token] Using token: oyfjn4.4onx040evbjr30d7
	I0815 23:06:21.568107   20724 out.go:235]   - Configuring RBAC rules ...
	I0815 23:06:21.568244   20724 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 23:06:21.575869   20724 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 23:06:21.587706   20724 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 23:06:21.592536   20724 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 23:06:21.596190   20724 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 23:06:21.599812   20724 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 23:06:21.889737   20724 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 23:06:22.328573   20724 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 23:06:22.889112   20724 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 23:06:22.890075   20724 kubeadm.go:310] 
	I0815 23:06:22.890149   20724 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 23:06:22.890158   20724 kubeadm.go:310] 
	I0815 23:06:22.890254   20724 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 23:06:22.890266   20724 kubeadm.go:310] 
	I0815 23:06:22.890292   20724 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 23:06:22.890362   20724 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 23:06:22.890433   20724 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 23:06:22.890444   20724 kubeadm.go:310] 
	I0815 23:06:22.890512   20724 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 23:06:22.890522   20724 kubeadm.go:310] 
	I0815 23:06:22.890602   20724 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 23:06:22.890624   20724 kubeadm.go:310] 
	I0815 23:06:22.890701   20724 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 23:06:22.890809   20724 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 23:06:22.890900   20724 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 23:06:22.890909   20724 kubeadm.go:310] 
	I0815 23:06:22.891041   20724 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 23:06:22.891148   20724 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 23:06:22.891159   20724 kubeadm.go:310] 
	I0815 23:06:22.891260   20724 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oyfjn4.4onx040evbjr30d7 \
	I0815 23:06:22.891388   20724 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0815 23:06:22.891417   20724 kubeadm.go:310] 	--control-plane 
	I0815 23:06:22.891422   20724 kubeadm.go:310] 
	I0815 23:06:22.891528   20724 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 23:06:22.891539   20724 kubeadm.go:310] 
	I0815 23:06:22.891653   20724 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oyfjn4.4onx040evbjr30d7 \
	I0815 23:06:22.891808   20724 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
	I0815 23:06:22.892791   20724 kubeadm.go:310] W0815 23:06:12.091310     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 23:06:22.893205   20724 kubeadm.go:310] W0815 23:06:12.092085     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 23:06:22.893358   20724 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
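The deprecation warnings above come from kubeadm itself: the config minikube wrote to /var/tmp/minikube/kubeadm.yaml still uses the kubeadm.k8s.io/v1beta3 API. A minimal sketch of the migration kubeadm recommends, assuming it is run on the node (for example via `minikube ssh -p addons-517040`); the file names old.yaml and new.yaml are placeholders, not files the test creates:

    # Copy the config kubeadm was started with, then rewrite it against the newer API version.
    sudo cp /var/tmp/minikube/kubeadm.yaml old.yaml
    sudo kubeadm config migrate --old-config old.yaml --new-config new.yaml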
	I0815 23:06:22.893415   20724 cni.go:84] Creating CNI manager for ""
	I0815 23:06:22.893428   20724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 23:06:22.895286   20724 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 23:06:22.897034   20724 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 23:06:22.908911   20724 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
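If the bridge CNI setup is suspect, one way to confirm the config landed where CRI-O expects it (a verification sketch only, not part of the test run; execute inside the VM, e.g. via `minikube ssh -p addons-517040`):

    # List the CNI config directory and dump the file minikube just copied into place.
    ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist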
	I0815 23:06:22.929168   20724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 23:06:22.929253   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:22.929280   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-517040 minikube.k8s.io/updated_at=2024_08_15T23_06_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=addons-517040 minikube.k8s.io/primary=true
	I0815 23:06:23.080016   20724 ops.go:34] apiserver oom_adj: -16
	I0815 23:06:23.080168   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:23.580268   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:24.080871   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:24.580819   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:25.080853   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:25.580829   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:26.080783   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:26.580806   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:27.081084   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:27.194036   20724 kubeadm.go:1113] duration metric: took 4.264833487s to wait for elevateKubeSystemPrivileges
	I0815 23:06:27.194074   20724 kubeadm.go:394] duration metric: took 15.286584162s to StartCluster
	I0815 23:06:27.194097   20724 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:27.194240   20724 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:06:27.194718   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:27.194953   20724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 23:06:27.194980   20724 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:06:27.195054   20724 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0815 23:06:27.195150   20724 addons.go:69] Setting yakd=true in profile "addons-517040"
	I0815 23:06:27.195156   20724 addons.go:69] Setting helm-tiller=true in profile "addons-517040"
	I0815 23:06:27.195165   20724 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-517040"
	I0815 23:06:27.195182   20724 addons.go:234] Setting addon yakd=true in "addons-517040"
	I0815 23:06:27.195176   20724 addons.go:69] Setting ingress=true in profile "addons-517040"
	I0815 23:06:27.195187   20724 config.go:182] Loaded profile config "addons-517040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:06:27.195200   20724 addons.go:69] Setting cloud-spanner=true in profile "addons-517040"
	I0815 23:06:27.195210   20724 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-517040"
	I0815 23:06:27.195214   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195214   20724 addons.go:234] Setting addon ingress=true in "addons-517040"
	I0815 23:06:27.195218   20724 addons.go:234] Setting addon cloud-spanner=true in "addons-517040"
	I0815 23:06:27.195191   20724 addons.go:234] Setting addon helm-tiller=true in "addons-517040"
	I0815 23:06:27.195243   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195250   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195254   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195258   20724 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-517040"
	I0815 23:06:27.195279   20724 addons.go:69] Setting default-storageclass=true in profile "addons-517040"
	I0815 23:06:27.195295   20724 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-517040"
	I0815 23:06:27.195314   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195314   20724 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-517040"
	I0815 23:06:27.195347   20724 addons.go:69] Setting registry=true in profile "addons-517040"
	I0815 23:06:27.195365   20724 addons.go:234] Setting addon registry=true in "addons-517040"
	I0815 23:06:27.195383   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195250   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195637   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195648   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195659   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195660   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195671   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195676   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195743   20724 addons.go:69] Setting inspektor-gadget=true in profile "addons-517040"
	I0815 23:06:27.195743   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195752   20724 addons.go:69] Setting gcp-auth=true in profile "addons-517040"
	I0815 23:06:27.195755   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195763   20724 addons.go:234] Setting addon inspektor-gadget=true in "addons-517040"
	I0815 23:06:27.195766   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195769   20724 mustload.go:65] Loading cluster: addons-517040
	I0815 23:06:27.195784   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195785   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195838   20724 addons.go:69] Setting metrics-server=true in profile "addons-517040"
	I0815 23:06:27.195844   20724 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-517040"
	I0815 23:06:27.195859   20724 addons.go:234] Setting addon metrics-server=true in "addons-517040"
	I0815 23:06:27.195863   20724 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-517040"
	I0815 23:06:27.195871   20724 addons.go:69] Setting storage-provisioner=true in profile "addons-517040"
	I0815 23:06:27.195884   20724 addons.go:69] Setting volcano=true in profile "addons-517040"
	I0815 23:06:27.195888   20724 addons.go:234] Setting addon storage-provisioner=true in "addons-517040"
	I0815 23:06:27.195901   20724 addons.go:69] Setting volumesnapshots=true in profile "addons-517040"
	I0815 23:06:27.195903   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195905   20724 addons.go:234] Setting addon volcano=true in "addons-517040"
	I0815 23:06:27.195918   20724 addons.go:234] Setting addon volumesnapshots=true in "addons-517040"
	I0815 23:06:27.195919   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195932   20724 config.go:182] Loaded profile config "addons-517040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:06:27.196019   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196060   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196099   20724 addons.go:69] Setting ingress-dns=true in profile "addons-517040"
	I0815 23:06:27.196134   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196143   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196109   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.196163   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196149   20724 addons.go:234] Setting addon ingress-dns=true in "addons-517040"
	I0815 23:06:27.196165   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196266   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.196241   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.196513   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196540   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196567   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196590   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196778   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.196804   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196883   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196912   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.197163   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.197189   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.197265   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.197291   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.197545   20724 out.go:177] * Verifying Kubernetes components...
	I0815 23:06:27.199164   20724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:06:27.216755   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
	I0815 23:06:27.217213   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0815 23:06:27.217396   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.217479   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I0815 23:06:27.217612   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.217815   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.218047   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.218061   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.218126   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.218141   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.218312   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.218332   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.218819   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.218834   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.219337   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.219874   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.219914   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.220544   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.220572   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.220913   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.221073   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.221499   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.221526   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.222189   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.222220   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.232801   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41185
	I0815 23:06:27.233477   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.234241   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.234262   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.234676   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.235316   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.235355   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.236208   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I0815 23:06:27.236729   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.237328   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.237346   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.237741   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.238392   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.238429   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.240170   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0815 23:06:27.240685   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.241199   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.241220   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.241584   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.242166   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.242203   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.252009   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43119
	I0815 23:06:27.252563   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.253245   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.253267   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.253751   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.254000   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.255359   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0815 23:06:27.255929   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.256723   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.256953   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:27.256975   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:27.258962   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0815 23:06:27.258963   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:27.259105   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.259113   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:27.259120   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.259123   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:27.259132   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:27.259139   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:27.259409   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:27.259442   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:27.259450   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	W0815 23:06:27.259534   20724 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0815 23:06:27.259758   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.259950   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.261048   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0815 23:06:27.261362   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.261443   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.261839   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.261880   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.262018   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.262028   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.262478   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.262528   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.262642   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.264423   20724 addons.go:234] Setting addon default-storageclass=true in "addons-517040"
	I0815 23:06:27.264468   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.264469   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.264874   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.264905   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.265426   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.265457   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.266658   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 23:06:27.268114   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 23:06:27.269178   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41731
	I0815 23:06:27.269364   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45037
	I0815 23:06:27.269730   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.270280   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.270297   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.270669   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.271175   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 23:06:27.271239   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.271259   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.271943   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33029
	I0815 23:06:27.272442   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.272996   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.273015   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.273412   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.273797   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 23:06:27.274014   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.274033   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.274235   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0815 23:06:27.274684   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.275268   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.275285   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.275707   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.275763   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0815 23:06:27.276129   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 23:06:27.276425   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.276450   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.276484   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.276977   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.276994   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.277021   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.277551   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.278147   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.278182   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.278810   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.278829   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.278903   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0815 23:06:27.279363   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.279970   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.279986   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.280382   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.280629   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.281644   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 23:06:27.282199   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.282775   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.282814   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.283033   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.283357   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.283385   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.284594   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 23:06:27.285925   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 23:06:27.287029   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 23:06:27.287050   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 23:06:27.287073   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.290652   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.291299   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.291321   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.291557   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.291783   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.291959   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.292152   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.292697   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0815 23:06:27.293426   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.295182   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I0815 23:06:27.295595   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.296122   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.296137   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.296583   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.297160   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.297198   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.297910   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.297935   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.299953   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0815 23:06:27.300563   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.300996   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.301011   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.301417   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.301620   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0815 23:06:27.301790   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.302102   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.302440   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.304094   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.304707   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.304724   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.305094   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.305273   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.307201   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.307666   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35729
	I0815 23:06:27.308153   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.308426   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I0815 23:06:27.308893   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.308909   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.309251   20724 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0815 23:06:27.309497   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.310147   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.310184   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.310360   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.310393   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.310743   20724 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 23:06:27.310762   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 23:06:27.310781   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.311520   20724 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-517040"
	I0815 23:06:27.311562   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.311913   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.311951   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.312033   20724 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 23:06:27.312158   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44985
	I0815 23:06:27.312363   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.312384   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.313213   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.313214   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.313861   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.313879   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.314334   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.314528   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.314937   20724 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 23:06:27.316170   20724 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 23:06:27.316188   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 23:06:27.316208   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.316314   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.316623   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.318198   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I0815 23:06:27.318660   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.319265   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.319290   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.320219   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.320448   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.320453   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.321122   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.322086   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0815 23:06:27.322150   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.322460   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.322882   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.322906   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.323402   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.323972   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.324040   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.324203   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40199
	I0815 23:06:27.324478   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.324542   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.324617   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.324777   20724 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 23:06:27.324884   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.325169   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.325360   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.325523   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.325593   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.325789   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.326106   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.326130   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.326177   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41007
	I0815 23:06:27.326386   20724 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 23:06:27.326435   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.326446   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.326531   20724 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 23:06:27.326546   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 23:06:27.326562   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.326576   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.326629   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.326643   20724 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 23:06:27.326747   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.327263   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.327492   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.327548   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0815 23:06:27.327644   20724 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 23:06:27.327659   20724 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 23:06:27.327678   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.328354   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 23:06:27.328370   20724 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 23:06:27.328391   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.328538   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.329017   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.329038   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.329334   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.329504   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.330275   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.330712   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.330733   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.330893   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.330940   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0815 23:06:27.331144   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.331270   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.331305   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.331451   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.332276   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.332293   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.332745   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.332901   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.332954   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.333284   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I0815 23:06:27.333423   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.333447   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.333605   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.333813   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.333827   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.333932   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.334225   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.334363   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.334556   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.335196   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.335214   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.335233   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.335926   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.335932   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.335939   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0815 23:06:27.335983   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.336010   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I0815 23:06:27.336030   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.336073   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.336313   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.336329   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.336676   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.336412   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.336460   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.336478   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.337023   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.337104   20724 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 23:06:27.337251   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.337266   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.337327   20724 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 23:06:27.337340   20724 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 23:06:27.337355   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.337393   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.337404   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.337568   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.337583   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.337921   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.338119   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.338373   20724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 23:06:27.338378   20724 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 23:06:27.338373   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 23:06:27.338391   20724 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 23:06:27.338465   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.338979   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.339549   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.339627   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 23:06:27.339636   20724 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 23:06:27.339648   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.339600   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.340869   20724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 23:06:27.342326   20724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 23:06:27.343930   20724 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 23:06:27.343953   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 23:06:27.345923   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.346013   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346045   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.346076   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346093   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.346144   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346167   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.346189   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346204   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.346244   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.346282   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346299   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.346313   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.346316   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346330   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.346373   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.346455   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.346492   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.346617   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.346674   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.346722   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.346967   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.347249   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.347459   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.347715   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.348482   20724 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 23:06:27.349298   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.349415   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.349699   20724 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 23:06:27.349716   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 23:06:27.349719   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.349734   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.349739   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.349837   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.350044   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.350181   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.350600   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.351092   20724 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 23:06:27.352465   20724 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 23:06:27.352477   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 23:06:27.352490   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.353189   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.353671   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.353708   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.353883   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.354044   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.354198   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.354311   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.356048   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.356084   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.356105   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.356168   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.356354   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.356509   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.356672   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.357152   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0815 23:06:27.357473   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.358010   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.358037   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.358365   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.358521   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43299
	I0815 23:06:27.358662   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.358916   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.359393   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.359415   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.359795   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.361944   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.361994   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.363527   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.363840   20724 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 23:06:27.365155   20724 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 23:06:27.365241   20724 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 23:06:27.365256   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 23:06:27.365275   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.367892   20724 out.go:177]   - Using image docker.io/busybox:stable
	I0815 23:06:27.368872   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.369322   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.369344   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.369410   20724 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 23:06:27.369424   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 23:06:27.369440   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.369504   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.369663   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.369793   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.369953   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.372333   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.372659   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.372684   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.372850   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.373136   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.373280   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.373474   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.664327   20724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:06:27.664410   20724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 23:06:27.724251   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 23:06:27.746343   20724 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 23:06:27.746360   20724 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 23:06:27.788769   20724 node_ready.go:35] waiting up to 6m0s for node "addons-517040" to be "Ready" ...
	I0815 23:06:27.789613   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 23:06:27.789633   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 23:06:27.791802   20724 node_ready.go:49] node "addons-517040" has status "Ready":"True"
	I0815 23:06:27.791818   20724 node_ready.go:38] duration metric: took 3.025383ms for node "addons-517040" to be "Ready" ...
	I0815 23:06:27.791826   20724 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 23:06:27.799739   20724 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-frrxx" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:27.839141   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 23:06:27.892847   20724 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 23:06:27.892874   20724 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 23:06:27.908151   20724 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 23:06:27.908175   20724 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 23:06:27.908943   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 23:06:27.957701   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 23:06:27.972085   20724 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 23:06:27.972105   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 23:06:27.977647   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 23:06:27.984056   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 23:06:27.984079   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 23:06:27.985289   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 23:06:27.985305   20724 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 23:06:27.994984   20724 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 23:06:27.995007   20724 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 23:06:27.996823   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 23:06:28.056753   20724 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 23:06:28.056785   20724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 23:06:28.061361   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 23:06:28.105306   20724 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 23:06:28.105322   20724 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 23:06:28.170545   20724 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 23:06:28.170572   20724 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 23:06:28.171829   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 23:06:28.171842   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 23:06:28.187642   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 23:06:28.192318   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 23:06:28.192344   20724 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 23:06:28.204582   20724 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 23:06:28.204603   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 23:06:28.382770   20724 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 23:06:28.382792   20724 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 23:06:28.386261   20724 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 23:06:28.386288   20724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 23:06:28.387126   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 23:06:28.387141   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 23:06:28.434875   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 23:06:28.434894   20724 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 23:06:28.454616   20724 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 23:06:28.454641   20724 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 23:06:28.460499   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 23:06:28.583555   20724 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 23:06:28.583580   20724 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 23:06:28.589387   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 23:06:28.589407   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 23:06:28.616404   20724 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 23:06:28.616428   20724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 23:06:28.738774   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 23:06:28.744813   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 23:06:28.744831   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 23:06:28.785786   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 23:06:28.785806   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 23:06:28.850585   20724 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 23:06:28.850618   20724 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 23:06:28.960740   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 23:06:28.988827   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 23:06:28.988858   20724 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 23:06:29.204574   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 23:06:29.204599   20724 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 23:06:29.318008   20724 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 23:06:29.318036   20724 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 23:06:29.403661   20724 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 23:06:29.403680   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 23:06:29.478833   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 23:06:29.478855   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 23:06:29.602942   20724 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 23:06:29.602965   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 23:06:29.737502   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 23:06:29.807082   20724 pod_ready.go:103] pod "coredns-6f6b679f8f-frrxx" in "kube-system" namespace has status "Ready":"False"
	I0815 23:06:29.826176   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 23:06:29.826196   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 23:06:29.881818   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 23:06:30.097328   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 23:06:30.097354   20724 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 23:06:30.190703   20724 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.526260081s)
	I0815 23:06:30.190741   20724 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0815 23:06:30.190793   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.466513863s)
	I0815 23:06:30.190853   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:30.190869   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:30.191269   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:30.191271   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:30.191290   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:30.191300   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:30.191312   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:30.191545   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:30.191599   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:30.191618   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:30.204624   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:30.204644   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:30.204883   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:30.204904   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:30.392925   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 23:06:30.695021   20724 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-517040" context rescaled to 1 replicas
	I0815 23:06:31.421309   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.582127006s)
	I0815 23:06:31.421365   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:31.421378   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:31.421753   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:31.421754   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:31.421785   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:31.421799   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:31.421816   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:31.422049   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:31.422063   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:31.977761   20724 pod_ready.go:93] pod "coredns-6f6b679f8f-frrxx" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:31.977784   20724 pod_ready.go:82] duration metric: took 4.17800836s for pod "coredns-6f6b679f8f-frrxx" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:31.977795   20724 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mtm8z" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:32.196153   20724 pod_ready.go:93] pod "coredns-6f6b679f8f-mtm8z" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:32.196175   20724 pod_ready.go:82] duration metric: took 218.373877ms for pod "coredns-6f6b679f8f-mtm8z" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:32.196184   20724 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:33.330871   20724 pod_ready.go:93] pod "etcd-addons-517040" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:33.330893   20724 pod_ready.go:82] duration metric: took 1.13470316s for pod "etcd-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:33.330904   20724 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:33.894328   20724 pod_ready.go:93] pod "kube-apiserver-addons-517040" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:33.894349   20724 pod_ready.go:82] duration metric: took 563.438426ms for pod "kube-apiserver-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:33.894360   20724 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:34.379799   20724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 23:06:34.379843   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:34.382681   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:34.383127   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:34.383156   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:34.383363   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:34.383554   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:34.383690   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:34.383859   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:34.965508   20724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 23:06:35.240997   20724 addons.go:234] Setting addon gcp-auth=true in "addons-517040"
	I0815 23:06:35.241056   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:35.241458   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:35.241488   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:35.256751   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0815 23:06:35.257219   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:35.257781   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:35.257809   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:35.258160   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:35.258773   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:35.258806   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:35.274597   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
	I0815 23:06:35.275039   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:35.275554   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:35.275582   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:35.275975   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:35.276180   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:35.277931   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:35.278184   20724 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 23:06:35.278212   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:35.280818   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:35.281237   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:35.281289   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:35.281513   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:35.281715   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:35.281902   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:35.282049   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:35.678531   20724 pod_ready.go:93] pod "kube-controller-manager-addons-517040" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:35.678562   20724 pod_ready.go:82] duration metric: took 1.78419486s for pod "kube-controller-manager-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.678579   20724 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cg5sj" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.774000   20724 pod_ready.go:93] pod "kube-proxy-cg5sj" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:35.774026   20724 pod_ready.go:82] duration metric: took 95.438465ms for pod "kube-proxy-cg5sj" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.774039   20724 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.826526   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.917546473s)
	I0815 23:06:35.826584   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.826596   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.826628   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.868895718s)
	I0815 23:06:35.826662   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.826677   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.826685   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.849013533s)
	I0815 23:06:35.826707   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.826718   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.826780   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.829930542s)
	I0815 23:06:35.826819   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.826835   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827022   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827047   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827056   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827066   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827072   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827076   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827080   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827081   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827095   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827086   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827142   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827153   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827161   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.765776081s)
	I0815 23:06:35.827169   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827177   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827177   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827185   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827186   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827196   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827257   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.639577874s)
	I0815 23:06:35.827282   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827291   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827349   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.366826699s)
	I0815 23:06:35.827362   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827370   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827451   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827458   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.088659648s)
	I0815 23:06:35.827470   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827476   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827478   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827484   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827493   20724 addons.go:475] Verifying addon ingress=true in "addons-517040"
	I0815 23:06:35.827542   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.866778353s)
	I0815 23:06:35.827598   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827616   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827637   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827644   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827652   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827659   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827736   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827760   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827778   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827789   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827797   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827802   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827805   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827810   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827813   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827817   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.828168   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.828189   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.828206   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.828218   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.828580   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.828625   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.828633   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.828732   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.828752   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.828764   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.828784   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.828791   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.829212   20724 out.go:177] * Verifying ingress addon...
	I0815 23:06:35.829965   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.829982   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.829990   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.829998   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.830049   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.830067   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.830077   20724 addons.go:475] Verifying addon metrics-server=true in "addons-517040"
	I0815 23:06:35.830928   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.830963   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.830970   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.831642   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.831660   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.831724   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.831778   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.831785   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.832576   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.832595   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.832604   20724 addons.go:475] Verifying addon registry=true in "addons-517040"
	I0815 23:06:35.827557   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.833018   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.833214   20724 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 23:06:35.833904   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.833925   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.833937   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.833954   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.833966   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.834173   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.834211   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.834223   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.834340   20724 out.go:177] * Verifying registry addon...
	I0815 23:06:35.835574   20724 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-517040 service yakd-dashboard -n yakd-dashboard
	
	I0815 23:06:35.836315   20724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 23:06:35.916088   20724 pod_ready.go:93] pod "kube-scheduler-addons-517040" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:35.916119   20724 pod_ready.go:82] duration metric: took 142.071089ms for pod "kube-scheduler-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.916130   20724 pod_ready.go:39] duration metric: took 8.124291977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 23:06:35.916147   20724 api_server.go:52] waiting for apiserver process to appear ...
	I0815 23:06:35.916207   20724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:06:35.963068   20724 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 23:06:35.963088   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:35.963614   20724 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 23:06:35.963636   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:36.080785   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:36.080809   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:36.081083   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:36.081104   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:36.081121   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:36.453925   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:36.454448   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:36.632917   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.751058543s)
	I0815 23:06:36.632971   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:36.632986   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:36.633054   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.895495311s)
	W0815 23:06:36.633096   20724 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 23:06:36.633134   20724 retry.go:31] will retry after 302.814585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 23:06:36.633325   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:36.633341   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:36.633351   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:36.633365   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:36.633605   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:36.633621   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:36.633630   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:36.881432   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:36.882348   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:36.936353   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
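The apply failure logged above is the usual CRD-ordering race: the VolumeSnapshotClass object is submitted in the same apply as the CRDs that define its kind, so the API server has no resource mapping for it yet ("ensure CRDs are installed first"), and minikube simply retries once the CRDs have registered. A minimal sketch of avoiding the race by hand, assuming the same manifests are available to a local kubectl (in this run they live on the node under /etc/kubernetes/addons and are applied over SSH):

	# 1. register the snapshot CRDs and wait until they are served
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# 2. only then create the VolumeSnapshotClass itself; the "resource mapping not found" error no longer applies
	kubectl apply -f csi-hostpath-snapshotclass.yaml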
	I0815 23:06:37.340028   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:37.344305   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:37.864864   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.471881309s)
	I0815 23:06:37.864920   20724 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.586712652s)
	I0815 23:06:37.864954   20724 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.948726479s)
	I0815 23:06:37.864980   20724 api_server.go:72] duration metric: took 10.669971599s to wait for apiserver process to appear ...
	I0815 23:06:37.864989   20724 api_server.go:88] waiting for apiserver healthz status ...
	I0815 23:06:37.864922   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:37.865116   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:37.865007   20724 api_server.go:253] Checking apiserver healthz at https://192.168.39.72:8443/healthz ...
	I0815 23:06:37.865370   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:37.865398   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:37.865421   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:37.865448   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:37.865399   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:37.865764   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:37.865770   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:37.865780   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:37.865792   20724 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-517040"
	I0815 23:06:37.866818   20724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 23:06:37.867909   20724 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 23:06:37.869530   20724 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 23:06:37.870532   20724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 23:06:37.870865   20724 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 23:06:37.870879   20724 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 23:06:37.899962   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:37.900128   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:37.909431   20724 api_server.go:279] https://192.168.39.72:8443/healthz returned 200:
	ok
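For reference, the healthz probe recorded here can be reproduced through kubectl's raw API access; a minimal sketch, assuming the context name used throughout this run:

	# equivalent manual check of the apiserver health endpoint; prints "ok" on success
	kubectl --context addons-517040 get --raw=/healthz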
	I0815 23:06:37.924754   20724 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 23:06:37.924785   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:37.925935   20724 api_server.go:141] control plane version: v1.31.0
	I0815 23:06:37.925965   20724 api_server.go:131] duration metric: took 60.968126ms to wait for apiserver health ...
	I0815 23:06:37.925976   20724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 23:06:37.979429   20724 system_pods.go:59] 19 kube-system pods found
	I0815 23:06:37.979469   20724 system_pods.go:61] "coredns-6f6b679f8f-frrxx" [4c35a93c-3c9b-4cea-92cb-531486f62524] Running
	I0815 23:06:37.979477   20724 system_pods.go:61] "coredns-6f6b679f8f-mtm8z" [d8f0df8d-c410-42be-8666-0163180a0538] Running
	I0815 23:06:37.979486   20724 system_pods.go:61] "csi-hostpath-attacher-0" [01dbe91e-f366-491a-8be7-b218b193563b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 23:06:37.979495   20724 system_pods.go:61] "csi-hostpath-resizer-0" [61b797df-07bd-4c06-b75c-c53c45041656] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 23:06:37.979511   20724 system_pods.go:61] "csi-hostpathplugin-czvm7" [54f95c32-b72c-4a0a-8cb5-ef390efa1828] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 23:06:37.979522   20724 system_pods.go:61] "etcd-addons-517040" [a383b556-e08e-4662-a12b-a216b451adae] Running
	I0815 23:06:37.979528   20724 system_pods.go:61] "kube-apiserver-addons-517040" [8cb5a50a-7182-4950-8536-1c9096d610b6] Running
	I0815 23:06:37.979533   20724 system_pods.go:61] "kube-controller-manager-addons-517040" [2621a39a-9e97-4529-9f42-14a71926f35b] Running
	I0815 23:06:37.979542   20724 system_pods.go:61] "kube-ingress-dns-minikube" [53e62c76-994b-4d37-9ac3-fada87d1d0c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0815 23:06:37.979551   20724 system_pods.go:61] "kube-proxy-cg5sj" [ede8c3a9-8c4a-44a9-b8d9-6db190ceae87] Running
	I0815 23:06:37.979560   20724 system_pods.go:61] "kube-scheduler-addons-517040" [b7257417-cd09-4f5b-ae64-4c2109240535] Running
	I0815 23:06:37.979572   20724 system_pods.go:61] "metrics-server-8988944d9-4mjqf" [f4e01981-c592-4b6b-a285-4046cf8c68c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 23:06:37.979584   20724 system_pods.go:61] "nvidia-device-plugin-daemonset-62jx9" [e1e1e2d3-eb2b-497d-9a69-d33c5428ad96] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0815 23:06:37.979595   20724 system_pods.go:61] "registry-6fb4cdfc84-g5m9x" [3fa1cd07-9f55-41bb-85a9-a958de7f5cbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 23:06:37.979603   20724 system_pods.go:61] "registry-proxy-h2mkz" [22fe5d24-ea50-43c5-a4bf-ee443e253852] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 23:06:37.979613   20724 system_pods.go:61] "snapshot-controller-56fcc65765-pldzx" [dadb75f8-7f43-4070-a2a0-42efc5ee3c44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 23:06:37.979625   20724 system_pods.go:61] "snapshot-controller-56fcc65765-ttz7q" [c029e2be-05d3-4a1b-8689-f32802401e3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 23:06:37.979633   20724 system_pods.go:61] "storage-provisioner" [a4cede15-f6e5-4422-a61f-260751693d94] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 23:06:37.979644   20724 system_pods.go:61] "tiller-deploy-b48cc5f79-frmxp" [662d1936-5dbb-49d3-a200-0d9f9d807bfe] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0815 23:06:37.979655   20724 system_pods.go:74] duration metric: took 53.67235ms to wait for pod list to return data ...
	I0815 23:06:37.979667   20724 default_sa.go:34] waiting for default service account to be created ...
	I0815 23:06:38.001036   20724 default_sa.go:45] found service account: "default"
	I0815 23:06:38.001072   20724 default_sa.go:55] duration metric: took 21.395911ms for default service account to be created ...
	I0815 23:06:38.001085   20724 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 23:06:38.029108   20724 system_pods.go:86] 19 kube-system pods found
	I0815 23:06:38.029139   20724 system_pods.go:89] "coredns-6f6b679f8f-frrxx" [4c35a93c-3c9b-4cea-92cb-531486f62524] Running
	I0815 23:06:38.029145   20724 system_pods.go:89] "coredns-6f6b679f8f-mtm8z" [d8f0df8d-c410-42be-8666-0163180a0538] Running
	I0815 23:06:38.029153   20724 system_pods.go:89] "csi-hostpath-attacher-0" [01dbe91e-f366-491a-8be7-b218b193563b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 23:06:38.029160   20724 system_pods.go:89] "csi-hostpath-resizer-0" [61b797df-07bd-4c06-b75c-c53c45041656] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 23:06:38.029166   20724 system_pods.go:89] "csi-hostpathplugin-czvm7" [54f95c32-b72c-4a0a-8cb5-ef390efa1828] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 23:06:38.029171   20724 system_pods.go:89] "etcd-addons-517040" [a383b556-e08e-4662-a12b-a216b451adae] Running
	I0815 23:06:38.029175   20724 system_pods.go:89] "kube-apiserver-addons-517040" [8cb5a50a-7182-4950-8536-1c9096d610b6] Running
	I0815 23:06:38.029179   20724 system_pods.go:89] "kube-controller-manager-addons-517040" [2621a39a-9e97-4529-9f42-14a71926f35b] Running
	I0815 23:06:38.029186   20724 system_pods.go:89] "kube-ingress-dns-minikube" [53e62c76-994b-4d37-9ac3-fada87d1d0c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0815 23:06:38.029191   20724 system_pods.go:89] "kube-proxy-cg5sj" [ede8c3a9-8c4a-44a9-b8d9-6db190ceae87] Running
	I0815 23:06:38.029195   20724 system_pods.go:89] "kube-scheduler-addons-517040" [b7257417-cd09-4f5b-ae64-4c2109240535] Running
	I0815 23:06:38.029202   20724 system_pods.go:89] "metrics-server-8988944d9-4mjqf" [f4e01981-c592-4b6b-a285-4046cf8c68c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 23:06:38.029213   20724 system_pods.go:89] "nvidia-device-plugin-daemonset-62jx9" [e1e1e2d3-eb2b-497d-9a69-d33c5428ad96] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0815 23:06:38.029222   20724 system_pods.go:89] "registry-6fb4cdfc84-g5m9x" [3fa1cd07-9f55-41bb-85a9-a958de7f5cbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 23:06:38.029227   20724 system_pods.go:89] "registry-proxy-h2mkz" [22fe5d24-ea50-43c5-a4bf-ee443e253852] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 23:06:38.029232   20724 system_pods.go:89] "snapshot-controller-56fcc65765-pldzx" [dadb75f8-7f43-4070-a2a0-42efc5ee3c44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 23:06:38.029239   20724 system_pods.go:89] "snapshot-controller-56fcc65765-ttz7q" [c029e2be-05d3-4a1b-8689-f32802401e3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 23:06:38.029245   20724 system_pods.go:89] "storage-provisioner" [a4cede15-f6e5-4422-a61f-260751693d94] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 23:06:38.029250   20724 system_pods.go:89] "tiller-deploy-b48cc5f79-frmxp" [662d1936-5dbb-49d3-a200-0d9f9d807bfe] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0815 23:06:38.029259   20724 system_pods.go:126] duration metric: took 28.168446ms to wait for k8s-apps to be running ...
	I0815 23:06:38.029266   20724 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 23:06:38.029309   20724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
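The kubelet check above is executed over SSH inside the node; the same probe can be run directly from the host, a sketch assuming the profile name from this run:

	# exit status 0 means the kubelet unit is active on the minikube node
	minikube -p addons-517040 ssh -- sudo systemctl is-active kubelet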
	I0815 23:06:38.106718   20724 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 23:06:38.106745   20724 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 23:06:38.245700   20724 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 23:06:38.245721   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 23:06:38.321313   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 23:06:38.337074   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:38.340116   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:38.375655   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:38.837459   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:38.839489   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:38.875875   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:39.337973   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:39.340838   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:39.377564   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:39.626804   20724 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.597472157s)
	I0815 23:06:39.626846   20724 system_svc.go:56] duration metric: took 1.597577114s WaitForService to wait for kubelet
	I0815 23:06:39.626858   20724 kubeadm.go:582] duration metric: took 12.43184741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:06:39.626881   20724 node_conditions.go:102] verifying NodePressure condition ...
	I0815 23:06:39.626813   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.690407741s)
	I0815 23:06:39.626960   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:39.626980   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:39.627368   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:39.627384   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:39.627403   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:39.627411   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:39.627634   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:39.627653   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:39.627667   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:39.630491   20724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:06:39.630512   20724 node_conditions.go:123] node cpu capacity is 2
	I0815 23:06:39.630526   20724 node_conditions.go:105] duration metric: took 3.638741ms to run NodePressure ...
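The NodePressure figures above (17734596Ki ephemeral storage, 2 CPUs) are read from the node's reported capacity; a hedged one-liner for inspecting the same fields, assuming the context and node name from this run:

	# show the node's capacity (cpu, memory, ephemeral-storage)
	kubectl --context addons-517040 get node addons-517040 -o jsonpath='{.status.capacity}'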
	I0815 23:06:39.630539   20724 start.go:241] waiting for startup goroutines ...
	I0815 23:06:39.894802   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:39.909389   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:39.909613   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:40.072048   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.750694219s)
	I0815 23:06:40.072102   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:40.072117   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:40.072402   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:40.072423   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:40.072426   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:40.072433   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:40.072442   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:40.072698   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:40.072729   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:40.072742   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:40.075046   20724 addons.go:475] Verifying addon gcp-auth=true in "addons-517040"
	I0815 23:06:40.077673   20724 out.go:177] * Verifying gcp-auth addon...
	I0815 23:06:40.079543   20724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 23:06:40.093612   20724 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 23:06:40.093637   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
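The rest of the log is this polling loop repeating per addon label until the pods leave Pending; the same wait can be expressed directly with kubectl, a sketch using the gcp-auth selector shown above (context, namespace, and timeout are taken from or assumed for this run):

	# block until the gcp-auth webhook pod reports Ready, or time out
	kubectl --context addons-517040 -n gcp-auth wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=gcp-auth --timeout=180s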
	I0815 23:06:40.338467   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:40.341267   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:40.377709   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:40.584233   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:40.838616   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:40.841721   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:40.875508   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:41.083782   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:41.349458   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:41.349870   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:41.375828   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:41.582636   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:41.838313   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:41.839952   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:41.874906   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:42.082589   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:42.337329   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:42.338669   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:42.375604   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:42.698862   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:42.838291   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:42.839898   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:42.876494   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:43.083914   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:43.338263   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:43.339842   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:43.376566   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:43.584036   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:43.838159   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:43.839938   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:43.875682   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:44.083404   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:44.337126   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:44.340899   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:44.375677   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:44.583474   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:44.837797   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:44.839639   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:44.875592   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:45.083241   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:45.337952   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:45.340135   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:45.376667   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:45.583972   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:45.837687   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:45.839347   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:45.875457   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:46.083487   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:46.343347   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:46.343410   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:46.443138   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:46.584108   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:46.838944   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:46.840733   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:46.875332   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:47.082989   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:47.337382   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:47.339503   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:47.375968   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:47.584314   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:47.838412   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:47.840156   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:47.875033   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:48.083534   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:48.336949   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:48.340008   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:48.376192   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:48.583998   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:49.302644   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:49.304769   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:49.304925   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:49.305029   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:49.337375   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:49.339836   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:49.375309   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:49.582807   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:49.837748   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:49.839147   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:49.875141   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:50.083372   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:50.336939   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:50.339292   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:50.375397   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:50.583954   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:50.838436   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:50.840068   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:50.876284   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:51.083964   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:51.340251   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:51.345444   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:51.376720   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:51.584231   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:51.838127   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:51.840165   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:51.876163   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:52.083037   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:52.337075   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:52.340037   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:52.375454   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:52.584061   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:52.838525   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:52.841532   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:52.876035   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:53.085277   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:53.340215   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:53.342882   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:53.375539   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:53.583526   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:53.839537   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:53.842395   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:53.888568   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:54.083379   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:54.337476   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:54.339806   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:54.375852   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:54.582998   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:54.838321   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:54.839707   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:54.875426   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:55.082568   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:55.342692   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:55.342831   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:55.375584   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:55.583030   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:55.837335   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:55.838771   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:55.875665   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:56.083492   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:56.339511   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:56.341633   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:56.377904   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:56.582717   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:56.838031   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:56.839965   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:56.874676   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:57.084416   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:57.338527   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:57.340011   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:57.376230   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:57.584020   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:57.850160   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:57.850468   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:57.952917   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:58.083122   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:58.337855   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:58.339786   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:58.375904   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:58.583599   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:58.846281   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:58.859304   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:58.946349   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:59.084463   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:59.337272   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:59.340238   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:59.380055   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:59.583892   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:59.837944   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:59.839660   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:59.875693   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:00.085471   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:00.337231   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:00.339412   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:00.375349   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:00.583102   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:00.837517   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:00.843220   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:00.875008   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:01.083884   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:01.338493   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:01.340475   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:01.375339   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:01.582983   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:01.838727   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:01.840252   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:01.876040   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:02.083995   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:02.338754   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:02.340022   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:02.375492   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:02.583755   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:02.839195   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:02.840198   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:02.875193   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:03.087477   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:03.338351   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:03.342592   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:03.377073   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:03.584240   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:03.839122   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:03.840253   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:03.874638   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:04.083148   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:04.337707   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:04.339236   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:04.374695   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:04.582445   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:04.838371   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:04.839901   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:04.876544   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:05.082888   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:05.337380   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:05.339022   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:05.375009   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:05.596159   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:05.837788   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:05.841449   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:05.874473   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:06.083053   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:06.337436   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:06.339639   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:06.375890   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:06.582673   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:06.915031   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:06.915032   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:06.916904   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:07.083932   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:07.337677   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:07.342120   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:07.375803   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:07.584555   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:07.837796   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:07.839507   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:07.875659   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:08.084063   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:08.338110   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:08.340452   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:08.375544   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:08.583357   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:08.838314   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:08.840022   20724 kapi.go:107] duration metric: took 33.003705789s to wait for kubernetes.io/minikube-addons=registry ...
	I0815 23:07:08.874755   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:09.083315   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:09.337139   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:09.376070   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:09.583415   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:09.841396   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:09.875252   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:10.082862   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:10.337874   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:10.375378   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:10.582843   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:10.837873   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:10.875296   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:11.083576   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:11.338360   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:11.375039   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:11.583926   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:11.837892   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:11.875818   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:12.204074   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:12.339930   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:12.375930   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:12.585041   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:12.842123   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:12.883875   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:13.084143   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:13.340598   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:13.377507   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:13.587453   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:13.837874   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:13.874768   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:14.083350   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:14.337049   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:14.375416   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:14.582636   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:14.837695   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:14.874913   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:15.083281   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:15.336939   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:15.375566   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:15.598071   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:16.037502   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:16.038349   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:16.083240   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:16.341118   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:16.376717   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:16.583513   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:16.837268   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:16.874695   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:17.082471   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:17.337882   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:17.375511   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:17.584105   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:17.838206   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:17.876613   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:18.083831   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:18.338094   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:18.375400   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:18.583227   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:18.838300   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:18.875873   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:19.083042   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:19.341865   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:19.374706   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:19.583481   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:19.839067   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:19.941183   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:20.085275   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:20.338476   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:20.374848   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:20.583717   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:20.845411   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:20.875309   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:21.082789   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:21.338435   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:21.376114   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:21.584004   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:21.838481   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:21.882078   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:22.084871   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:22.338372   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:22.377119   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:22.583762   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:22.837720   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:22.876144   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:23.084717   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:23.509268   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:23.511201   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:23.610452   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:23.838438   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:23.875180   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:24.114139   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:24.338827   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:24.375453   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:24.583516   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:24.837512   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:24.875357   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:25.086986   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:25.338386   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:25.375580   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:25.582946   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:25.837692   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:25.875784   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:26.083424   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:26.337298   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:26.375791   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:26.584102   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:26.847591   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:26.955867   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:27.084951   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:27.338047   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:27.377186   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:27.583647   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:27.838769   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:27.875256   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:28.082645   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:28.338068   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:28.382245   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:28.584435   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:28.838775   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:28.876175   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:29.084675   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:29.337470   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:29.374433   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:29.583272   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:29.838421   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:29.874702   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:30.085036   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:30.346771   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:30.379734   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:30.583120   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:30.839653   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:30.875905   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:31.083502   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:31.337441   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:31.377221   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:31.588774   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:31.837563   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:31.874686   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:32.083593   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:32.338196   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:32.376126   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:32.583636   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:32.838131   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:32.875813   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:33.083324   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:33.337572   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:33.374756   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:33.583714   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:33.837615   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:33.875502   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:34.083978   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:34.338862   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:34.375854   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:34.584381   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:34.838528   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:34.876830   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:35.083511   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:35.337924   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:35.375772   20724 kapi.go:107] duration metric: took 57.505237794s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 23:07:35.583479   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:35.838225   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:36.084266   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:36.338212   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:36.583991   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:36.838197   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:37.083084   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:37.337675   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:37.584197   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:37.838174   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:38.083771   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:38.338026   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:38.582624   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:38.838017   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:39.083573   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:39.337992   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:39.583444   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:39.837945   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:40.089391   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:40.337436   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:40.583802   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:40.837388   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:41.082669   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:41.338910   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:41.583084   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:41.838657   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:42.417331   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:42.417901   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:42.583759   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:42.837115   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:43.082635   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:43.337294   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:43.582841   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:43.837323   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:44.083282   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:44.339312   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:44.792306   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:44.837781   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:45.083231   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:45.338315   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:45.583343   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:45.839170   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:46.085225   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:46.337904   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:46.583822   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:46.838767   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:47.083670   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:47.339992   20724 kapi.go:107] duration metric: took 1m11.506768588s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 23:07:47.582653   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:48.083974   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:48.586584   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:49.083032   20724 kapi.go:107] duration metric: took 1m9.003486345s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 23:07:49.084992   20724 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-517040 cluster.
	I0815 23:07:49.086394   20724 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 23:07:49.088085   20724 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 23:07:49.089344   20724 out.go:177] * Enabled addons: default-storageclass, ingress-dns, cloud-spanner, metrics-server, helm-tiller, storage-provisioner, nvidia-device-plugin, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0815 23:07:49.090595   20724 addons.go:510] duration metric: took 1m21.895542853s for enable addons: enabled=[default-storageclass ingress-dns cloud-spanner metrics-server helm-tiller storage-provisioner nvidia-device-plugin yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0815 23:07:49.090628   20724 start.go:246] waiting for cluster config update ...
	I0815 23:07:49.090642   20724 start.go:255] writing updated cluster config ...
	I0815 23:07:49.090881   20724 ssh_runner.go:195] Run: rm -f paused
	I0815 23:07:49.141221   20724 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 23:07:49.142897   20724 out.go:177] * Done! kubectl is now configured to use "addons-517040" cluster and "default" namespace by default
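	A minimal sketch of the two options the gcp-auth messages above describe, assuming a hypothetical pod name my-pod and an assumed label value of true (only the label key gcp-auth-skip-secret and the --refresh flag come from the log output itself; this snippet is not part of the captured run):
	
	# Assumed example: opt one pod out of credential mounting via the label key named in the log
	kubectl --context addons-517040 label pod my-pod gcp-auth-skip-secret=true
	# Assumed example: remount credentials into existing pods by re-running the addon with --refresh, as the log suggests
	minikube -p addons-517040 addons enable gcp-auth --refresh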
	
	
	==> CRI-O <==
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.595684131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763492595657075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7aeb7d38-a391-4a6c-9bf5-785efe9c22ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.596223265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09c9c971-af6c-4b41-bf71-a282da6a4481 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.596279975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09c9c971-af6c-4b41-bf71-a282da6a4481 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.596592135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:587d41a9670abbcb893c28180eb87709353a9042620aede43cce0d4211917757,PodSandboxId:1481e58423b29e2a8a2c6284fab05d8808d00d4f57cee1ed94f5bdbd08ce1972,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723763483744240081,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bxccf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 633a66a4-e3b2-442f-8b09-ab0c395605df,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7aaca643dfd9b8aa29f0cb69a7b703b8a0616b2bd0b0f757450625ea7a29456,PodSandboxId:00fb62481fea5491a7ae7a30917dd3a39960f636ec1e0daa293d907625668f4a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723763369708269617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lw8lr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 81da26ef-ec50-4d25-9e68-5daf93bbc089,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee7004b42faa7de00787141893fa64b6dac5c9e7523e014b84096f5b32b7bf,PodSandboxId:8f87c7ca4b22be89a95d0e0a38c79679d06b1e8399e9765fc59b8b308f76794e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723763342415725772,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5c0b5079-ac0c-4418-9904-70626aa5e8a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e94101b907952596fe98baa590eae3d59c7b0a9b547ff2676641e96dc7bfcd,PodSandboxId:48dbf8ac51d762c635548965a7459cb12e26e9f9ca6cab9dd574a27bd505e357,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723763272566074617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a2fa3b2-791e-48ef-be92-888357fe9cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438cc38e5eaec79c5f90be992c021ee29fc4df9113b15e7af234e4127d921026,PodSandboxId:506127dafcf1668faa10929ac35effb16e363c3a57734a9004e3b23ea60f0c82,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723763239989325995,Labels:map[string]string{io.kubernetes.container.name: patch,io.kuber
netes.pod.name: ingress-nginx-admission-patch-sps8k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 98d522e4-782c-4f8c-bbc1-012e22ebc350,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8320e2c19284ee22b42e0d497623fb5776ceb46aace42caac83e4a90fd3bf456,PodSandboxId:3834c595a9e7b34647c82a5e9281384f9be4cbc67d1fb359b48a401a68cabfad,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723763239869992797,Labels:map[string]string{io.kuber
netes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7gqcb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3736048e-9b95-49aa-b71c-cffc14525fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676bba24daba93ea7fff4302bf45bc176524315dfa6ffdb45a4c8ce41f13738c,PodSandboxId:9325a6cd6715f4699712eb40c9f5016898743a5a47ce3c18e24f5bc3512b05aa,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:17237632323
09967707,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-4mjqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e01981-c592-4b6b-a285-4046cf8c68c0,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08,PodSandboxId:34f0ae38ae64119934e40f276eb62b021909c4b0bb33e8285ecfe11900f0cb6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f
5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723763194859223677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4cede15-f6e5-4422-a61f-260751693d94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3,PodSandboxId:328c613b14858d6b618687563f5afcf9241ac637e422a95255a6a96fa270d615,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687
f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723763190463277779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mtm8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f0df8d-c410-42be-8666-0163180a0538,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a,PodSandboxId:a6a31bca98aa383199539ffcb569c2e2143c8bee02be45f4c8f360a470aa0097,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Imag
eSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723763187945446864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cg5sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918,PodSandboxId:09e7916c53c75c07337bae6ed869a2c8eebdd646c28ceda32e2c778a5fdc6874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e5704
0741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723763176683271176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72ded37e150ae0b29e520797537348,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6,PodSandboxId:9e717d3a8c79c8eb7656ea1ffc869d6b1bc71d8a480efd1fcf7365c4857065b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e66
1e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723763176566199563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202f95e6f816d20eb9ce27dea34ed92b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169,PodSandboxId:45fac553f83f4d33698d06478d7b8fb9336318b016fe2911b6ab9b266051bf87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723763176597813746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114b1fdcb0e22b9a92ce8b83728b0267,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03,PodSandboxId:4eb36ed0d90ef1f61481dcbae7c4c44680ed193f586ada40657fdc671414e89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723763176518253342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c72ec0755a8a97da3644a1b805d7ac6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09c9c971-af6c-4b41-bf71-a282da6a4481 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.634243612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6832a74c-716d-4c14-a2db-e795ec23de76 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.634327828Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6832a74c-716d-4c14-a2db-e795ec23de76 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.636247749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab1627f9-0385-4df2-949c-3db5b37535fc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.637484211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763492637456539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab1627f9-0385-4df2-949c-3db5b37535fc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.638036250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fc82d81-9d12-4719-9d33-bb485e1e969f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.638100747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fc82d81-9d12-4719-9d33-bb485e1e969f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.638434887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:587d41a9670abbcb893c28180eb87709353a9042620aede43cce0d4211917757,PodSandboxId:1481e58423b29e2a8a2c6284fab05d8808d00d4f57cee1ed94f5bdbd08ce1972,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723763483744240081,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bxccf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 633a66a4-e3b2-442f-8b09-ab0c395605df,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7aaca643dfd9b8aa29f0cb69a7b703b8a0616b2bd0b0f757450625ea7a29456,PodSandboxId:00fb62481fea5491a7ae7a30917dd3a39960f636ec1e0daa293d907625668f4a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723763369708269617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lw8lr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 81da26ef-ec50-4d25-9e68-5daf93bbc089,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee7004b42faa7de00787141893fa64b6dac5c9e7523e014b84096f5b32b7bf,PodSandboxId:8f87c7ca4b22be89a95d0e0a38c79679d06b1e8399e9765fc59b8b308f76794e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723763342415725772,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5c0b5079-ac0c-4418-9904-70626aa5e8a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e94101b907952596fe98baa590eae3d59c7b0a9b547ff2676641e96dc7bfcd,PodSandboxId:48dbf8ac51d762c635548965a7459cb12e26e9f9ca6cab9dd574a27bd505e357,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723763272566074617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a2fa3b2-791e-48ef-be92-888357fe9cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438cc38e5eaec79c5f90be992c021ee29fc4df9113b15e7af234e4127d921026,PodSandboxId:506127dafcf1668faa10929ac35effb16e363c3a57734a9004e3b23ea60f0c82,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723763239989325995,Labels:map[string]string{io.kubernetes.container.name: patch,io.kuber
netes.pod.name: ingress-nginx-admission-patch-sps8k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 98d522e4-782c-4f8c-bbc1-012e22ebc350,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8320e2c19284ee22b42e0d497623fb5776ceb46aace42caac83e4a90fd3bf456,PodSandboxId:3834c595a9e7b34647c82a5e9281384f9be4cbc67d1fb359b48a401a68cabfad,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723763239869992797,Labels:map[string]string{io.kuber
netes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7gqcb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3736048e-9b95-49aa-b71c-cffc14525fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676bba24daba93ea7fff4302bf45bc176524315dfa6ffdb45a4c8ce41f13738c,PodSandboxId:9325a6cd6715f4699712eb40c9f5016898743a5a47ce3c18e24f5bc3512b05aa,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:17237632323
09967707,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-4mjqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e01981-c592-4b6b-a285-4046cf8c68c0,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08,PodSandboxId:34f0ae38ae64119934e40f276eb62b021909c4b0bb33e8285ecfe11900f0cb6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f
5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723763194859223677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4cede15-f6e5-4422-a61f-260751693d94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3,PodSandboxId:328c613b14858d6b618687563f5afcf9241ac637e422a95255a6a96fa270d615,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687
f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723763190463277779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mtm8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f0df8d-c410-42be-8666-0163180a0538,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a,PodSandboxId:a6a31bca98aa383199539ffcb569c2e2143c8bee02be45f4c8f360a470aa0097,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Imag
eSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723763187945446864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cg5sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918,PodSandboxId:09e7916c53c75c07337bae6ed869a2c8eebdd646c28ceda32e2c778a5fdc6874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e5704
0741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723763176683271176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72ded37e150ae0b29e520797537348,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6,PodSandboxId:9e717d3a8c79c8eb7656ea1ffc869d6b1bc71d8a480efd1fcf7365c4857065b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e66
1e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723763176566199563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202f95e6f816d20eb9ce27dea34ed92b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169,PodSandboxId:45fac553f83f4d33698d06478d7b8fb9336318b016fe2911b6ab9b266051bf87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723763176597813746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114b1fdcb0e22b9a92ce8b83728b0267,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03,PodSandboxId:4eb36ed0d90ef1f61481dcbae7c4c44680ed193f586ada40657fdc671414e89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723763176518253342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c72ec0755a8a97da3644a1b805d7ac6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fc82d81-9d12-4719-9d33-bb485e1e969f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.678361807Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de15f265-d523-4c85-a9d8-a6ba4a0d4161 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.678432187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de15f265-d523-4c85-a9d8-a6ba4a0d4161 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.679872039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=799e92af-310c-40a5-a206-1b596bf2b06a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.681161445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763492681130275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=799e92af-310c-40a5-a206-1b596bf2b06a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.681863040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2cb7b23-c5f2-4346-a8db-3f854cd31897 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.681941218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2cb7b23-c5f2-4346-a8db-3f854cd31897 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.682254291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:587d41a9670abbcb893c28180eb87709353a9042620aede43cce0d4211917757,PodSandboxId:1481e58423b29e2a8a2c6284fab05d8808d00d4f57cee1ed94f5bdbd08ce1972,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723763483744240081,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bxccf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 633a66a4-e3b2-442f-8b09-ab0c395605df,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7aaca643dfd9b8aa29f0cb69a7b703b8a0616b2bd0b0f757450625ea7a29456,PodSandboxId:00fb62481fea5491a7ae7a30917dd3a39960f636ec1e0daa293d907625668f4a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723763369708269617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lw8lr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 81da26ef-ec50-4d25-9e68-5daf93bbc089,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee7004b42faa7de00787141893fa64b6dac5c9e7523e014b84096f5b32b7bf,PodSandboxId:8f87c7ca4b22be89a95d0e0a38c79679d06b1e8399e9765fc59b8b308f76794e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723763342415725772,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5c0b5079-ac0c-4418-9904-70626aa5e8a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e94101b907952596fe98baa590eae3d59c7b0a9b547ff2676641e96dc7bfcd,PodSandboxId:48dbf8ac51d762c635548965a7459cb12e26e9f9ca6cab9dd574a27bd505e357,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723763272566074617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a2fa3b2-791e-48ef-be92-888357fe9cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438cc38e5eaec79c5f90be992c021ee29fc4df9113b15e7af234e4127d921026,PodSandboxId:506127dafcf1668faa10929ac35effb16e363c3a57734a9004e3b23ea60f0c82,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723763239989325995,Labels:map[string]string{io.kubernetes.container.name: patch,io.kuber
netes.pod.name: ingress-nginx-admission-patch-sps8k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 98d522e4-782c-4f8c-bbc1-012e22ebc350,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8320e2c19284ee22b42e0d497623fb5776ceb46aace42caac83e4a90fd3bf456,PodSandboxId:3834c595a9e7b34647c82a5e9281384f9be4cbc67d1fb359b48a401a68cabfad,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723763239869992797,Labels:map[string]string{io.kuber
netes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7gqcb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3736048e-9b95-49aa-b71c-cffc14525fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676bba24daba93ea7fff4302bf45bc176524315dfa6ffdb45a4c8ce41f13738c,PodSandboxId:9325a6cd6715f4699712eb40c9f5016898743a5a47ce3c18e24f5bc3512b05aa,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:17237632323
09967707,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-4mjqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e01981-c592-4b6b-a285-4046cf8c68c0,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08,PodSandboxId:34f0ae38ae64119934e40f276eb62b021909c4b0bb33e8285ecfe11900f0cb6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f
5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723763194859223677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4cede15-f6e5-4422-a61f-260751693d94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3,PodSandboxId:328c613b14858d6b618687563f5afcf9241ac637e422a95255a6a96fa270d615,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687
f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723763190463277779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mtm8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f0df8d-c410-42be-8666-0163180a0538,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a,PodSandboxId:a6a31bca98aa383199539ffcb569c2e2143c8bee02be45f4c8f360a470aa0097,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Imag
eSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723763187945446864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cg5sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918,PodSandboxId:09e7916c53c75c07337bae6ed869a2c8eebdd646c28ceda32e2c778a5fdc6874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e5704
0741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723763176683271176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72ded37e150ae0b29e520797537348,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6,PodSandboxId:9e717d3a8c79c8eb7656ea1ffc869d6b1bc71d8a480efd1fcf7365c4857065b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e66
1e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723763176566199563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202f95e6f816d20eb9ce27dea34ed92b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169,PodSandboxId:45fac553f83f4d33698d06478d7b8fb9336318b016fe2911b6ab9b266051bf87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723763176597813746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114b1fdcb0e22b9a92ce8b83728b0267,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03,PodSandboxId:4eb36ed0d90ef1f61481dcbae7c4c44680ed193f586ada40657fdc671414e89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723763176518253342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c72ec0755a8a97da3644a1b805d7ac6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2cb7b23-c5f2-4346-a8db-3f854cd31897 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.717182266Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4306c26-649f-47f4-b916-88da3ab345ef name=/runtime.v1.RuntimeService/Version
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.717269994Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4306c26-649f-47f4-b916-88da3ab345ef name=/runtime.v1.RuntimeService/Version
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.718595155Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22c0b02d-a0ff-4808-9239-83b91892b69b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.720041591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763492720014126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22c0b02d-a0ff-4808-9239-83b91892b69b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.720792816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34f15484-8280-452f-bf2b-edf49c68d837 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.720869359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34f15484-8280-452f-bf2b-edf49c68d837 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:11:32 addons-517040 crio[686]: time="2024-08-15 23:11:32.721184539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:587d41a9670abbcb893c28180eb87709353a9042620aede43cce0d4211917757,PodSandboxId:1481e58423b29e2a8a2c6284fab05d8808d00d4f57cee1ed94f5bdbd08ce1972,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723763483744240081,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bxccf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 633a66a4-e3b2-442f-8b09-ab0c395605df,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7aaca643dfd9b8aa29f0cb69a7b703b8a0616b2bd0b0f757450625ea7a29456,PodSandboxId:00fb62481fea5491a7ae7a30917dd3a39960f636ec1e0daa293d907625668f4a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723763369708269617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lw8lr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 81da26ef-ec50-4d25-9e68-5daf93bbc089,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee7004b42faa7de00787141893fa64b6dac5c9e7523e014b84096f5b32b7bf,PodSandboxId:8f87c7ca4b22be89a95d0e0a38c79679d06b1e8399e9765fc59b8b308f76794e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723763342415725772,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5c0b5079-ac0c-4418-9904-70626aa5e8a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e94101b907952596fe98baa590eae3d59c7b0a9b547ff2676641e96dc7bfcd,PodSandboxId:48dbf8ac51d762c635548965a7459cb12e26e9f9ca6cab9dd574a27bd505e357,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723763272566074617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a2fa3b2-791e-48ef-be92-888357fe9cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438cc38e5eaec79c5f90be992c021ee29fc4df9113b15e7af234e4127d921026,PodSandboxId:506127dafcf1668faa10929ac35effb16e363c3a57734a9004e3b23ea60f0c82,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723763239989325995,Labels:map[string]string{io.kubernetes.container.name: patch,io.kuber
netes.pod.name: ingress-nginx-admission-patch-sps8k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 98d522e4-782c-4f8c-bbc1-012e22ebc350,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8320e2c19284ee22b42e0d497623fb5776ceb46aace42caac83e4a90fd3bf456,PodSandboxId:3834c595a9e7b34647c82a5e9281384f9be4cbc67d1fb359b48a401a68cabfad,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723763239869992797,Labels:map[string]string{io.kuber
netes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7gqcb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3736048e-9b95-49aa-b71c-cffc14525fa8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676bba24daba93ea7fff4302bf45bc176524315dfa6ffdb45a4c8ce41f13738c,PodSandboxId:9325a6cd6715f4699712eb40c9f5016898743a5a47ce3c18e24f5bc3512b05aa,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:17237632323
09967707,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-4mjqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e01981-c592-4b6b-a285-4046cf8c68c0,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08,PodSandboxId:34f0ae38ae64119934e40f276eb62b021909c4b0bb33e8285ecfe11900f0cb6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f
5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723763194859223677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4cede15-f6e5-4422-a61f-260751693d94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3,PodSandboxId:328c613b14858d6b618687563f5afcf9241ac637e422a95255a6a96fa270d615,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687
f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723763190463277779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mtm8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f0df8d-c410-42be-8666-0163180a0538,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a,PodSandboxId:a6a31bca98aa383199539ffcb569c2e2143c8bee02be45f4c8f360a470aa0097,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Imag
eSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723763187945446864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cg5sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918,PodSandboxId:09e7916c53c75c07337bae6ed869a2c8eebdd646c28ceda32e2c778a5fdc6874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e5704
0741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723763176683271176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72ded37e150ae0b29e520797537348,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6,PodSandboxId:9e717d3a8c79c8eb7656ea1ffc869d6b1bc71d8a480efd1fcf7365c4857065b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e66
1e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723763176566199563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202f95e6f816d20eb9ce27dea34ed92b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169,PodSandboxId:45fac553f83f4d33698d06478d7b8fb9336318b016fe2911b6ab9b266051bf87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723763176597813746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114b1fdcb0e22b9a92ce8b83728b0267,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03,PodSandboxId:4eb36ed0d90ef1f61481dcbae7c4c44680ed193f586ada40657fdc671414e89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723763176518253342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c72ec0755a8a97da3644a1b805d7ac6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34f15484-8280-452f-bf2b-edf49c68d837 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	587d41a9670ab       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   1481e58423b29       hello-world-app-55bf9c44b4-bxccf
	d7aaca643dfd9       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        2 minutes ago       Running             headlamp                  0                   00fb62481fea5       headlamp-57fb76fcdb-lw8lr
	15ee7004b42fa       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   8f87c7ca4b22b       nginx
	14e94101b9079       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   48dbf8ac51d76       busybox
	438cc38e5eaec       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              patch                     0                   506127dafcf16       ingress-nginx-admission-patch-sps8k
	8320e2c19284e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   3834c595a9e7b       ingress-nginx-admission-create-7gqcb
	676bba24daba9       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   9325a6cd6715f       metrics-server-8988944d9-4mjqf
	e21f3d503431c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   34f0ae38ae641       storage-provisioner
	9cc0790a4dd8a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   328c613b14858       coredns-6f6b679f8f-mtm8z
	783cc30d2dd7d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   a6a31bca98aa3       kube-proxy-cg5sj
	9903cff7350a8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   09e7916c53c75       kube-scheduler-addons-517040
	3e11626924865       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   45fac553f83f4       kube-controller-manager-addons-517040
	02f3110373f4a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   9e717d3a8c79c       etcd-addons-517040
	0a061e44f1eb6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   4eb36ed0d90ef       kube-apiserver-addons-517040
	
	
	==> coredns [9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3] <==
	[INFO] 10.244.0.8:34968 - 45637 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156012s
	[INFO] 10.244.0.8:54248 - 30794 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125144s
	[INFO] 10.244.0.8:54248 - 15940 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068837s
	[INFO] 10.244.0.8:36658 - 37101 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072349s
	[INFO] 10.244.0.8:36658 - 52207 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050756s
	[INFO] 10.244.0.8:53860 - 30569 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141586s
	[INFO] 10.244.0.8:53860 - 47208 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086933s
	[INFO] 10.244.0.8:51072 - 49197 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000221128s
	[INFO] 10.244.0.8:51072 - 48944 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121617s
	[INFO] 10.244.0.8:58645 - 4884 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041041s
	[INFO] 10.244.0.8:58645 - 49162 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122933s
	[INFO] 10.244.0.8:46833 - 5940 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034973s
	[INFO] 10.244.0.8:46833 - 28982 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000127592s
	[INFO] 10.244.0.8:52695 - 57356 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000296s
	[INFO] 10.244.0.8:52695 - 15374 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121454s
	[INFO] 10.244.0.22:51658 - 56784 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000646471s
	[INFO] 10.244.0.22:55605 - 56632 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000374544s
	[INFO] 10.244.0.22:42138 - 60930 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100515s
	[INFO] 10.244.0.22:34345 - 30793 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000067425s
	[INFO] 10.244.0.22:39572 - 52722 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063156s
	[INFO] 10.244.0.22:51098 - 42197 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000049731s
	[INFO] 10.244.0.22:56710 - 6424 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000788608s
	[INFO] 10.244.0.22:51087 - 23101 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000560135s
	[INFO] 10.244.0.24:39868 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000291488s
	[INFO] 10.244.0.24:46146 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111083s
	
	
	==> describe nodes <==
	Name:               addons-517040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-517040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=addons-517040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T23_06_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-517040
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:06:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-517040
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:11:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:09:56 +0000   Thu, 15 Aug 2024 23:06:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:09:56 +0000   Thu, 15 Aug 2024 23:06:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:09:56 +0000   Thu, 15 Aug 2024 23:06:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:09:56 +0000   Thu, 15 Aug 2024 23:06:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    addons-517040
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 36e7519d3eca490ea4a9a1ff050606a7
	  System UUID:                36e7519d-3eca-490e-a4a9-a1ff050606a7
	  Boot ID:                    028cdf2c-7fe5-4c84-846c-ca06f7b1a090
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  default                     hello-world-app-55bf9c44b4-bxccf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  headlamp                    headlamp-57fb76fcdb-lw8lr                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 coredns-6f6b679f8f-mtm8z                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m5s
	  kube-system                 etcd-addons-517040                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m10s
	  kube-system                 kube-apiserver-addons-517040             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-addons-517040    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-cg5sj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-addons-517040             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 metrics-server-8988944d9-4mjqf           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m59s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m3s   kube-proxy       
	  Normal  Starting                 5m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m10s  kubelet          Node addons-517040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s  kubelet          Node addons-517040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s  kubelet          Node addons-517040 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m9s   kubelet          Node addons-517040 status is now: NodeReady
	  Normal  RegisteredNode           5m6s   node-controller  Node addons-517040 event: Registered Node addons-517040 in Controller
	
	
	==> dmesg <==
	[  +5.004085] kauditd_printk_skb: 113 callbacks suppressed
	[  +8.784335] kauditd_printk_skb: 97 callbacks suppressed
	[ +11.605210] kauditd_printk_skb: 1 callbacks suppressed
	[Aug15 23:07] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.661497] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.778386] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.175528] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.095303] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.385077] kauditd_printk_skb: 71 callbacks suppressed
	[  +6.531494] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.307334] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.054538] kauditd_printk_skb: 55 callbacks suppressed
	[  +9.699595] kauditd_printk_skb: 8 callbacks suppressed
	[Aug15 23:08] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.064700] kauditd_printk_skb: 24 callbacks suppressed
	[ +13.176090] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.018946] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.638132] kauditd_printk_skb: 71 callbacks suppressed
	[  +5.177872] kauditd_printk_skb: 41 callbacks suppressed
	[ +11.139351] kauditd_printk_skb: 11 callbacks suppressed
	[Aug15 23:09] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.298176] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.680984] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.683028] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 23:11] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6] <==
	{"level":"warn","ts":"2024-08-15T23:07:44.778322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.946041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-15T23:07:44.778337Z","caller":"traceutil/trace.go:171","msg":"trace[1995727999] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1115; }","duration":"196.96329ms","start":"2024-08-15T23:07:44.581368Z","end":"2024-08-15T23:07:44.778332Z","steps":["trace[1995727999] 'agreement among raft nodes before linearized reading'  (duration: 196.936008ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T23:07:54.316199Z","caller":"traceutil/trace.go:171","msg":"trace[1322011620] linearizableReadLoop","detail":"{readStateIndex:1216; appliedIndex:1215; }","duration":"279.336868ms","start":"2024-08-15T23:07:54.036846Z","end":"2024-08-15T23:07:54.316183Z","steps":["trace[1322011620] 'read index received'  (duration: 279.159992ms)","trace[1322011620] 'applied index is now lower than readState.Index'  (duration: 176.026µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T23:07:54.316309Z","caller":"traceutil/trace.go:171","msg":"trace[1359792755] transaction","detail":"{read_only:false; response_revision:1184; number_of_response:1; }","duration":"333.988133ms","start":"2024-08-15T23:07:53.982315Z","end":"2024-08-15T23:07:54.316303Z","steps":["trace[1359792755] 'process raft request'  (duration: 333.747168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:07:54.316416Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T23:07:53.982296Z","time spent":"334.031256ms","remote":"127.0.0.1:41854","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":11025,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/addons-517040\" mod_revision:1078 > success:<request_put:<key:\"/registry/minions/addons-517040\" value_size:10986 >> failure:<request_range:<key:\"/registry/minions/addons-517040\" > >"}
	{"level":"warn","ts":"2024-08-15T23:07:54.316558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.710935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-15T23:07:54.316684Z","caller":"traceutil/trace.go:171","msg":"trace[1498191755] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1184; }","duration":"279.83278ms","start":"2024-08-15T23:07:54.036842Z","end":"2024-08-15T23:07:54.316675Z","steps":["trace[1498191755] 'agreement among raft nodes before linearized reading'  (duration: 279.659067ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:07:54.316927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.109938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-8988944d9-4mjqf.17ec098739334263\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-08-15T23:07:54.316965Z","caller":"traceutil/trace.go:171","msg":"trace[691200612] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-8988944d9-4mjqf.17ec098739334263; range_end:; response_count:1; response_revision:1184; }","duration":"263.150274ms","start":"2024-08-15T23:07:54.053807Z","end":"2024-08-15T23:07:54.316958Z","steps":["trace[691200612] 'agreement among raft nodes before linearized reading'  (duration: 263.069128ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:07:54.317080Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.810284ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T23:07:54.317113Z","caller":"traceutil/trace.go:171","msg":"trace[535716717] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1184; }","duration":"136.844217ms","start":"2024-08-15T23:07:54.180264Z","end":"2024-08-15T23:07:54.317108Z","steps":["trace[535716717] 'agreement among raft nodes before linearized reading'  (duration: 136.803494ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:07:54.317800Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.465505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T23:07:54.320238Z","caller":"traceutil/trace.go:171","msg":"trace[2102502889] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1184; }","duration":"169.900128ms","start":"2024-08-15T23:07:54.150323Z","end":"2024-08-15T23:07:54.320223Z","steps":["trace[2102502889] 'agreement among raft nodes before linearized reading'  (duration: 167.445136ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T23:08:34.696425Z","caller":"traceutil/trace.go:171","msg":"trace[433513394] transaction","detail":"{read_only:false; response_revision:1388; number_of_response:1; }","duration":"154.81689ms","start":"2024-08-15T23:08:34.541588Z","end":"2024-08-15T23:08:34.696404Z","steps":["trace[433513394] 'process raft request'  (duration: 154.506796ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:08:37.270866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.032721ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17902812752124459763 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/test-pvc\" mod_revision:1411 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/test-pvc\" value_size:997 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/test-pvc\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T23:08:37.270955Z","caller":"traceutil/trace.go:171","msg":"trace[803499199] linearizableReadLoop","detail":"{readStateIndex:1460; appliedIndex:1459; }","duration":"330.596502ms","start":"2024-08-15T23:08:36.940347Z","end":"2024-08-15T23:08:37.270944Z","steps":["trace[803499199] 'read index received'  (duration: 24.447528ms)","trace[803499199] 'applied index is now lower than readState.Index'  (duration: 306.148178ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T23:08:37.271154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.830774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T23:08:37.271179Z","caller":"traceutil/trace.go:171","msg":"trace[1311152874] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1415; }","duration":"330.860022ms","start":"2024-08-15T23:08:36.940311Z","end":"2024-08-15T23:08:37.271171Z","steps":["trace[1311152874] 'agreement among raft nodes before linearized reading'  (duration: 330.805732ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:08:37.271205Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T23:08:36.940268Z","time spent":"330.929918ms","remote":"127.0.0.1:54048","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-15T23:08:37.271283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.380406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1069"}
	{"level":"info","ts":"2024-08-15T23:08:37.271312Z","caller":"traceutil/trace.go:171","msg":"trace[1229371035] transaction","detail":"{read_only:false; response_revision:1415; number_of_response:1; }","duration":"343.84108ms","start":"2024-08-15T23:08:36.927464Z","end":"2024-08-15T23:08:37.271306Z","steps":["trace[1229371035] 'process raft request'  (duration: 37.264457ms)","trace[1229371035] 'compare'  (duration: 305.62383ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T23:08:37.271318Z","caller":"traceutil/trace.go:171","msg":"trace[1728528703] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1416; }","duration":"265.421352ms","start":"2024-08-15T23:08:37.005889Z","end":"2024-08-15T23:08:37.271310Z","steps":["trace[1728528703] 'agreement among raft nodes before linearized reading'  (duration: 265.342615ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:08:37.271360Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T23:08:36.927436Z","time spent":"343.895015ms","remote":"127.0.0.1:41840","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1054,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/test-pvc\" mod_revision:1411 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/test-pvc\" value_size:997 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/test-pvc\" > >"}
	{"level":"info","ts":"2024-08-15T23:08:37.271740Z","caller":"traceutil/trace.go:171","msg":"trace[1436630512] transaction","detail":"{read_only:false; response_revision:1416; number_of_response:1; }","duration":"265.606163ms","start":"2024-08-15T23:08:37.006127Z","end":"2024-08-15T23:08:37.271733Z","steps":["trace[1436630512] 'process raft request'  (duration: 265.050164ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T23:09:01.720476Z","caller":"traceutil/trace.go:171","msg":"trace[1960795402] transaction","detail":"{read_only:false; response_revision:1629; number_of_response:1; }","duration":"141.140362ms","start":"2024-08-15T23:09:01.576454Z","end":"2024-08-15T23:09:01.717595Z","steps":["trace[1960795402] 'process raft request'  (duration: 140.77956ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:11:33 up 5 min,  0 users,  load average: 0.38, 1.09, 0.61
	Linux addons-517040 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0815 23:08:14.090870       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.106.174:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.106.174:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.106.174:443: connect: connection refused" logger="UnhandledError"
	E0815 23:08:14.098850       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.106.174:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.106.174:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.106.174:443: connect: connection refused" logger="UnhandledError"
	I0815 23:08:14.162159       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0815 23:08:26.103966       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 23:08:27.146246       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0815 23:08:39.804435       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.72:8443->10.244.0.26:37712: read: connection reset by peer
	I0815 23:08:44.929311       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 23:08:59.420323       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 23:08:59.640432       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.216.48"}
	E0815 23:09:03.503871       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0815 23:09:17.923924       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 23:09:17.924089       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 23:09:17.945790       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 23:09:17.945848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 23:09:17.988450       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 23:09:17.988554       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 23:09:18.011455       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 23:09:18.011508       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 23:09:19.011814       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0815 23:09:19.108915       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0815 23:09:19.110754       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0815 23:09:25.836671       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.189.90"}
	I0815 23:11:22.342434       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.126.185"}
	
	
	==> kube-controller-manager [3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169] <==
	W0815 23:10:23.944783       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:10:23.944926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:10:41.570321       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:10:41.570523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:10:43.526851       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:10:43.526969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:10:47.980049       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:10:47.980171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:11:14.231333       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:11:14.231477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 23:11:22.168881       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.638018ms"
	I0815 23:11:22.196847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="27.853346ms"
	I0815 23:11:22.210466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="13.525438ms"
	I0815 23:11:22.210679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="64.907µs"
	W0815 23:11:22.433887       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:11:22.434026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 23:11:24.154855       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.280922ms"
	I0815 23:11:24.160396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.169µs"
	I0815 23:11:24.794397       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0815 23:11:24.803000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="3.573µs"
	I0815 23:11:24.811041       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0815 23:11:26.483098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:11:26.483250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:11:28.984263       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:11:28.984346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:06:28.863372       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:06:28.885863       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.72"]
	E0815 23:06:28.885965       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:06:28.990396       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:06:28.990423       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:06:28.990450       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:06:28.997917       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:06:28.998279       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:06:28.998294       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:06:28.999719       1 config.go:197] "Starting service config controller"
	I0815 23:06:28.999745       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:06:28.999767       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:06:28.999771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:06:29.000305       1 config.go:326] "Starting node config controller"
	I0815 23:06:29.000313       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:06:29.101242       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:06:29.101285       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:06:29.101369       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918] <==
	W0815 23:06:19.320803       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 23:06:19.320841       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 23:06:20.164865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 23:06:20.164974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.192809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 23:06:20.192865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.262127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 23:06:20.262562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.301071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 23:06:20.301184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.408576       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 23:06:20.408689       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 23:06:20.417271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 23:06:20.417348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.422576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 23:06:20.422668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.437531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 23:06:20.438687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.587120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 23:06:20.587765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.613189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 23:06:20.613411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.631953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 23:06:20.632176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0815 23:06:23.213720       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 23:11:23 addons-517040 kubelet[1239]: I0815 23:11:23.467991    1239 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5hq5\" (UniqueName: \"kubernetes.io/projected/53e62c76-994b-4d37-9ac3-fada87d1d0c4-kube-api-access-b5hq5\") pod \"53e62c76-994b-4d37-9ac3-fada87d1d0c4\" (UID: \"53e62c76-994b-4d37-9ac3-fada87d1d0c4\") "
	Aug 15 23:11:23 addons-517040 kubelet[1239]: I0815 23:11:23.470143    1239 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e62c76-994b-4d37-9ac3-fada87d1d0c4-kube-api-access-b5hq5" (OuterVolumeSpecName: "kube-api-access-b5hq5") pod "53e62c76-994b-4d37-9ac3-fada87d1d0c4" (UID: "53e62c76-994b-4d37-9ac3-fada87d1d0c4"). InnerVolumeSpecName "kube-api-access-b5hq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 23:11:23 addons-517040 kubelet[1239]: I0815 23:11:23.569056    1239 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-b5hq5\" (UniqueName: \"kubernetes.io/projected/53e62c76-994b-4d37-9ac3-fada87d1d0c4-kube-api-access-b5hq5\") on node \"addons-517040\" DevicePath \"\""
	Aug 15 23:11:24 addons-517040 kubelet[1239]: I0815 23:11:24.118870    1239 scope.go:117] "RemoveContainer" containerID="de9d7a3702efeb2894d584284f3a9eb4d69ea1c17a26799e584d542cfb4b1edc"
	Aug 15 23:11:24 addons-517040 kubelet[1239]: I0815 23:11:24.160051    1239 scope.go:117] "RemoveContainer" containerID="de9d7a3702efeb2894d584284f3a9eb4d69ea1c17a26799e584d542cfb4b1edc"
	Aug 15 23:11:24 addons-517040 kubelet[1239]: E0815 23:11:24.163495    1239 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9d7a3702efeb2894d584284f3a9eb4d69ea1c17a26799e584d542cfb4b1edc\": container with ID starting with de9d7a3702efeb2894d584284f3a9eb4d69ea1c17a26799e584d542cfb4b1edc not found: ID does not exist" containerID="de9d7a3702efeb2894d584284f3a9eb4d69ea1c17a26799e584d542cfb4b1edc"
	Aug 15 23:11:24 addons-517040 kubelet[1239]: I0815 23:11:24.163533    1239 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9d7a3702efeb2894d584284f3a9eb4d69ea1c17a26799e584d542cfb4b1edc"} err="failed to get container status \"de9d7a3702efeb2894d584284f3a9eb4d69ea1c17a26799e584d542cfb4b1edc\": rpc error: code = NotFound desc = could not find container \"de9d7a3702efeb2894d584284f3a9eb4d69ea1c17a26799e584d542cfb4b1edc\": container with ID starting with de9d7a3702efeb2894d584284f3a9eb4d69ea1c17a26799e584d542cfb4b1edc not found: ID does not exist"
	Aug 15 23:11:24 addons-517040 kubelet[1239]: I0815 23:11:24.181927    1239 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-bxccf" podStartSLOduration=1.5458029930000001 podStartE2EDuration="2.181895904s" podCreationTimestamp="2024-08-15 23:11:22 +0000 UTC" firstStartedPulling="2024-08-15 23:11:23.092478868 +0000 UTC m=+301.006999574" lastFinishedPulling="2024-08-15 23:11:23.728571788 +0000 UTC m=+301.643092485" observedRunningTime="2024-08-15 23:11:24.146158855 +0000 UTC m=+302.060679554" watchObservedRunningTime="2024-08-15 23:11:24.181895904 +0000 UTC m=+302.096416619"
	Aug 15 23:11:24 addons-517040 kubelet[1239]: I0815 23:11:24.228054    1239 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e62c76-994b-4d37-9ac3-fada87d1d0c4" path="/var/lib/kubelet/pods/53e62c76-994b-4d37-9ac3-fada87d1d0c4/volumes"
	Aug 15 23:11:26 addons-517040 kubelet[1239]: I0815 23:11:26.227558    1239 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3736048e-9b95-49aa-b71c-cffc14525fa8" path="/var/lib/kubelet/pods/3736048e-9b95-49aa-b71c-cffc14525fa8/volumes"
	Aug 15 23:11:26 addons-517040 kubelet[1239]: I0815 23:11:26.228475    1239 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d522e4-782c-4f8c-bbc1-012e22ebc350" path="/var/lib/kubelet/pods/98d522e4-782c-4f8c-bbc1-012e22ebc350/volumes"
	Aug 15 23:11:27 addons-517040 kubelet[1239]: I0815 23:11:27.223234    1239 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.102899    1239 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksg44\" (UniqueName: \"kubernetes.io/projected/de6122f1-cd73-4873-b7f2-78341c9cf122-kube-api-access-ksg44\") pod \"de6122f1-cd73-4873-b7f2-78341c9cf122\" (UID: \"de6122f1-cd73-4873-b7f2-78341c9cf122\") "
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.102947    1239 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de6122f1-cd73-4873-b7f2-78341c9cf122-webhook-cert\") pod \"de6122f1-cd73-4873-b7f2-78341c9cf122\" (UID: \"de6122f1-cd73-4873-b7f2-78341c9cf122\") "
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.105980    1239 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6122f1-cd73-4873-b7f2-78341c9cf122-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "de6122f1-cd73-4873-b7f2-78341c9cf122" (UID: "de6122f1-cd73-4873-b7f2-78341c9cf122"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.106601    1239 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6122f1-cd73-4873-b7f2-78341c9cf122-kube-api-access-ksg44" (OuterVolumeSpecName: "kube-api-access-ksg44") pod "de6122f1-cd73-4873-b7f2-78341c9cf122" (UID: "de6122f1-cd73-4873-b7f2-78341c9cf122"). InnerVolumeSpecName "kube-api-access-ksg44". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.149048    1239 scope.go:117] "RemoveContainer" containerID="8bdf2409bec02dee5f329c91376af2cd17ed4419241dc2c5ff2deb6bd388f60d"
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.169774    1239 scope.go:117] "RemoveContainer" containerID="8bdf2409bec02dee5f329c91376af2cd17ed4419241dc2c5ff2deb6bd388f60d"
	Aug 15 23:11:28 addons-517040 kubelet[1239]: E0815 23:11:28.170448    1239 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bdf2409bec02dee5f329c91376af2cd17ed4419241dc2c5ff2deb6bd388f60d\": container with ID starting with 8bdf2409bec02dee5f329c91376af2cd17ed4419241dc2c5ff2deb6bd388f60d not found: ID does not exist" containerID="8bdf2409bec02dee5f329c91376af2cd17ed4419241dc2c5ff2deb6bd388f60d"
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.170491    1239 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bdf2409bec02dee5f329c91376af2cd17ed4419241dc2c5ff2deb6bd388f60d"} err="failed to get container status \"8bdf2409bec02dee5f329c91376af2cd17ed4419241dc2c5ff2deb6bd388f60d\": rpc error: code = NotFound desc = could not find container \"8bdf2409bec02dee5f329c91376af2cd17ed4419241dc2c5ff2deb6bd388f60d\": container with ID starting with 8bdf2409bec02dee5f329c91376af2cd17ed4419241dc2c5ff2deb6bd388f60d not found: ID does not exist"
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.203610    1239 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ksg44\" (UniqueName: \"kubernetes.io/projected/de6122f1-cd73-4873-b7f2-78341c9cf122-kube-api-access-ksg44\") on node \"addons-517040\" DevicePath \"\""
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.203724    1239 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de6122f1-cd73-4873-b7f2-78341c9cf122-webhook-cert\") on node \"addons-517040\" DevicePath \"\""
	Aug 15 23:11:28 addons-517040 kubelet[1239]: I0815 23:11:28.226710    1239 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6122f1-cd73-4873-b7f2-78341c9cf122" path="/var/lib/kubelet/pods/de6122f1-cd73-4873-b7f2-78341c9cf122/volumes"
	Aug 15 23:11:32 addons-517040 kubelet[1239]: E0815 23:11:32.548196    1239 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763492547767923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:11:32 addons-517040 kubelet[1239]: E0815 23:11:32.548238    1239 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763492547767923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08] <==
	I0815 23:06:35.923009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 23:06:36.046981       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 23:06:36.047119       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 23:06:36.307905       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 23:06:36.308156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-517040_4256da64-e71c-4807-97e1-ceb3c1645eca!
	I0815 23:06:36.309322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8ed2cda5-fb42-41c6-9a9f-6f4a4762f63e", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-517040_4256da64-e71c-4807-97e1-ceb3c1645eca became leader
	I0815 23:06:36.408319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-517040_4256da64-e71c-4807-97e1-ceb3c1645eca!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-517040 -n addons-517040
helpers_test.go:261: (dbg) Run:  kubectl --context addons-517040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.62s)

                                                
                                    
TestAddons/parallel/MetricsServer (306.23s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.125069ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-4mjqf" [f4e01981-c592-4b6b-a285-4046cf8c68c0] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003298992s
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (66.767721ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-517040, age: 2m2.669503876s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (70.055323ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 2m1.263479232s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (77.414802ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 2m6.107383447s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (92.944715ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 2m13.010358432s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (66.347068ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 2m22.515553131s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (65.383323ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 2m42.661899415s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (62.905228ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 3m9.483945743s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (62.834602ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 3m31.788334457s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (66.236823ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 4m3.663371618s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (61.769576ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 5m20.711347852s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (63.486578ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 6m12.472669461s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-517040 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-517040 top pods -n kube-system: exit status 1 (60.921422ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mtm8z, age: 6m56.155247712s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-517040 -n addons-517040
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-517040 logs -n 25: (1.302182613s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-195850                                                                     | download-only-195850 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-071536 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC |                     |
	|         | binary-mirror-071536                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39393                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-071536                                                                     | binary-mirror-071536 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
	| addons  | disable dashboard -p                                                                        | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC |                     |
	|         | addons-517040                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC |                     |
	|         | addons-517040                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-517040 --wait=true                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:07 UTC | 15 Aug 24 23:08 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | addons-517040                                                                               |                      |         |         |                     |                     |
	| ip      | addons-517040 ip                                                                            | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-517040 ssh cat                                                                       | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | /opt/local-path-provisioner/pvc-e577ed7e-383c-4543-b504-630414b64b8d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:09 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:08 UTC | 15 Aug 24 23:08 UTC |
	|         | -p addons-517040                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-517040 ssh curl -s                                                                   | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-517040 addons                                                                        | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-517040 addons                                                                        | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | addons-517040                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | -p addons-517040                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:09 UTC | 15 Aug 24 23:09 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-517040 ip                                                                            | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:11 UTC | 15 Aug 24 23:11 UTC |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:11 UTC | 15 Aug 24 23:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-517040 addons disable                                                                | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:11 UTC | 15 Aug 24 23:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-517040 addons                                                                        | addons-517040        | jenkins | v1.33.1 | 15 Aug 24 23:13 UTC | 15 Aug 24 23:13 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:05:40
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:05:40.726703   20724 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:05:40.726808   20724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:05:40.726837   20724 out.go:358] Setting ErrFile to fd 2...
	I0815 23:05:40.726843   20724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:05:40.727048   20724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:05:40.727631   20724 out.go:352] Setting JSON to false
	I0815 23:05:40.728417   20724 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2841,"bootTime":1723760300,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:05:40.728471   20724 start.go:139] virtualization: kvm guest
	I0815 23:05:40.730343   20724 out.go:177] * [addons-517040] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:05:40.731488   20724 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:05:40.731489   20724 notify.go:220] Checking for updates...
	I0815 23:05:40.733942   20724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:05:40.735214   20724 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:05:40.736269   20724 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:05:40.737460   20724 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:05:40.738705   20724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:05:40.740038   20724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:05:40.771541   20724 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 23:05:40.772872   20724 start.go:297] selected driver: kvm2
	I0815 23:05:40.772898   20724 start.go:901] validating driver "kvm2" against <nil>
	I0815 23:05:40.772909   20724 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:05:40.773596   20724 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:05:40.773673   20724 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:05:40.788752   20724 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:05:40.788797   20724 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 23:05:40.789019   20724 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:05:40.789078   20724 cni.go:84] Creating CNI manager for ""
	I0815 23:05:40.789091   20724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 23:05:40.789098   20724 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 23:05:40.789145   20724 start.go:340] cluster config:
	{Name:addons-517040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:05:40.789237   20724 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:05:40.791212   20724 out.go:177] * Starting "addons-517040" primary control-plane node in "addons-517040" cluster
	I0815 23:05:40.792445   20724 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:05:40.792483   20724 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:05:40.792493   20724 cache.go:56] Caching tarball of preloaded images
	I0815 23:05:40.792581   20724 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:05:40.792594   20724 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:05:40.792886   20724 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/config.json ...
	I0815 23:05:40.792910   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/config.json: {Name:mkc068a6cb6d319d2d53c22ac1e2ab4c83706ce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:05:40.793075   20724 start.go:360] acquireMachinesLock for addons-517040: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:05:40.793137   20724 start.go:364] duration metric: took 42.072µs to acquireMachinesLock for "addons-517040"
	I0815 23:05:40.793161   20724 start.go:93] Provisioning new machine with config: &{Name:addons-517040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:05:40.793221   20724 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 23:05:40.794853   20724 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0815 23:05:40.794983   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:05:40.795023   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:05:40.809129   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0815 23:05:40.809515   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:05:40.810087   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:05:40.810108   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:05:40.810436   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:05:40.810622   20724 main.go:141] libmachine: (addons-517040) Calling .GetMachineName
	I0815 23:05:40.810749   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:05:40.810898   20724 start.go:159] libmachine.API.Create for "addons-517040" (driver="kvm2")
	I0815 23:05:40.810923   20724 client.go:168] LocalClient.Create starting
	I0815 23:05:40.810965   20724 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0815 23:05:40.936183   20724 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0815 23:05:41.109642   20724 main.go:141] libmachine: Running pre-create checks...
	I0815 23:05:41.109670   20724 main.go:141] libmachine: (addons-517040) Calling .PreCreateCheck
	I0815 23:05:41.110190   20724 main.go:141] libmachine: (addons-517040) Calling .GetConfigRaw
	I0815 23:05:41.110600   20724 main.go:141] libmachine: Creating machine...
	I0815 23:05:41.110614   20724 main.go:141] libmachine: (addons-517040) Calling .Create
	I0815 23:05:41.110753   20724 main.go:141] libmachine: (addons-517040) Creating KVM machine...
	I0815 23:05:41.112029   20724 main.go:141] libmachine: (addons-517040) DBG | found existing default KVM network
	I0815 23:05:41.112710   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.112560   20746 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0815 23:05:41.112770   20724 main.go:141] libmachine: (addons-517040) DBG | created network xml: 
	I0815 23:05:41.112795   20724 main.go:141] libmachine: (addons-517040) DBG | <network>
	I0815 23:05:41.112807   20724 main.go:141] libmachine: (addons-517040) DBG |   <name>mk-addons-517040</name>
	I0815 23:05:41.112819   20724 main.go:141] libmachine: (addons-517040) DBG |   <dns enable='no'/>
	I0815 23:05:41.112830   20724 main.go:141] libmachine: (addons-517040) DBG |   
	I0815 23:05:41.112844   20724 main.go:141] libmachine: (addons-517040) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 23:05:41.112858   20724 main.go:141] libmachine: (addons-517040) DBG |     <dhcp>
	I0815 23:05:41.112870   20724 main.go:141] libmachine: (addons-517040) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 23:05:41.112894   20724 main.go:141] libmachine: (addons-517040) DBG |     </dhcp>
	I0815 23:05:41.112916   20724 main.go:141] libmachine: (addons-517040) DBG |   </ip>
	I0815 23:05:41.112971   20724 main.go:141] libmachine: (addons-517040) DBG |   
	I0815 23:05:41.113006   20724 main.go:141] libmachine: (addons-517040) DBG | </network>
	I0815 23:05:41.113020   20724 main.go:141] libmachine: (addons-517040) DBG | 
	I0815 23:05:41.118098   20724 main.go:141] libmachine: (addons-517040) DBG | trying to create private KVM network mk-addons-517040 192.168.39.0/24...
	I0815 23:05:41.180819   20724 main.go:141] libmachine: (addons-517040) DBG | private KVM network mk-addons-517040 192.168.39.0/24 created
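	The private network above can also be defined by hand from the same XML with virsh; a minimal sketch driving virsh from Go, assuming the XML has been saved locally as mk-addons-517040.xml (the file name and the use of virsh are illustrative, not the kvm2 driver's actual code path):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Define the persistent libvirt network from the XML shown in the log,
		// start it, and mark it to come up automatically with libvirtd.
		for _, args := range [][]string{
			{"net-define", "mk-addons-517040.xml"},
			{"net-start", "mk-addons-517040"},
			{"net-autostart", "mk-addons-517040"},
		} {
			out, err := exec.Command("virsh", args...).CombinedOutput()
			if err != nil {
				log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
			}
			log.Printf("virsh %v: %s", args, out)
		}
	}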
	I0815 23:05:41.180849   20724 main.go:141] libmachine: (addons-517040) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040 ...
	I0815 23:05:41.180877   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.180788   20746 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:05:41.180899   20724 main.go:141] libmachine: (addons-517040) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 23:05:41.180981   20724 main.go:141] libmachine: (addons-517040) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 23:05:41.428023   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.427909   20746 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa...
	I0815 23:05:41.521941   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.521785   20746 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/addons-517040.rawdisk...
	I0815 23:05:41.521971   20724 main.go:141] libmachine: (addons-517040) DBG | Writing magic tar header
	I0815 23:05:41.521986   20724 main.go:141] libmachine: (addons-517040) DBG | Writing SSH key tar header
	I0815 23:05:41.521997   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:41.521931   20746 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040 ...
	I0815 23:05:41.522077   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040
	I0815 23:05:41.522099   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0815 23:05:41.522111   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040 (perms=drwx------)
	I0815 23:05:41.522127   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0815 23:05:41.522139   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0815 23:05:41.522155   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:05:41.522166   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0815 23:05:41.522183   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 23:05:41.522196   20724 main.go:141] libmachine: (addons-517040) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 23:05:41.522214   20724 main.go:141] libmachine: (addons-517040) Creating domain...
	I0815 23:05:41.522227   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0815 23:05:41.522253   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 23:05:41.522270   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home/jenkins
	I0815 23:05:41.522280   20724 main.go:141] libmachine: (addons-517040) DBG | Checking permissions on dir: /home
	I0815 23:05:41.522291   20724 main.go:141] libmachine: (addons-517040) DBG | Skipping /home - not owner
	I0815 23:05:41.523151   20724 main.go:141] libmachine: (addons-517040) define libvirt domain using xml: 
	I0815 23:05:41.523186   20724 main.go:141] libmachine: (addons-517040) <domain type='kvm'>
	I0815 23:05:41.523196   20724 main.go:141] libmachine: (addons-517040)   <name>addons-517040</name>
	I0815 23:05:41.523203   20724 main.go:141] libmachine: (addons-517040)   <memory unit='MiB'>4000</memory>
	I0815 23:05:41.523232   20724 main.go:141] libmachine: (addons-517040)   <vcpu>2</vcpu>
	I0815 23:05:41.523252   20724 main.go:141] libmachine: (addons-517040)   <features>
	I0815 23:05:41.523263   20724 main.go:141] libmachine: (addons-517040)     <acpi/>
	I0815 23:05:41.523272   20724 main.go:141] libmachine: (addons-517040)     <apic/>
	I0815 23:05:41.523279   20724 main.go:141] libmachine: (addons-517040)     <pae/>
	I0815 23:05:41.523286   20724 main.go:141] libmachine: (addons-517040)     
	I0815 23:05:41.523291   20724 main.go:141] libmachine: (addons-517040)   </features>
	I0815 23:05:41.523296   20724 main.go:141] libmachine: (addons-517040)   <cpu mode='host-passthrough'>
	I0815 23:05:41.523304   20724 main.go:141] libmachine: (addons-517040)   
	I0815 23:05:41.523311   20724 main.go:141] libmachine: (addons-517040)   </cpu>
	I0815 23:05:41.523323   20724 main.go:141] libmachine: (addons-517040)   <os>
	I0815 23:05:41.523335   20724 main.go:141] libmachine: (addons-517040)     <type>hvm</type>
	I0815 23:05:41.523348   20724 main.go:141] libmachine: (addons-517040)     <boot dev='cdrom'/>
	I0815 23:05:41.523358   20724 main.go:141] libmachine: (addons-517040)     <boot dev='hd'/>
	I0815 23:05:41.523378   20724 main.go:141] libmachine: (addons-517040)     <bootmenu enable='no'/>
	I0815 23:05:41.523385   20724 main.go:141] libmachine: (addons-517040)   </os>
	I0815 23:05:41.523391   20724 main.go:141] libmachine: (addons-517040)   <devices>
	I0815 23:05:41.523399   20724 main.go:141] libmachine: (addons-517040)     <disk type='file' device='cdrom'>
	I0815 23:05:41.523421   20724 main.go:141] libmachine: (addons-517040)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/boot2docker.iso'/>
	I0815 23:05:41.523435   20724 main.go:141] libmachine: (addons-517040)       <target dev='hdc' bus='scsi'/>
	I0815 23:05:41.523448   20724 main.go:141] libmachine: (addons-517040)       <readonly/>
	I0815 23:05:41.523457   20724 main.go:141] libmachine: (addons-517040)     </disk>
	I0815 23:05:41.523471   20724 main.go:141] libmachine: (addons-517040)     <disk type='file' device='disk'>
	I0815 23:05:41.523484   20724 main.go:141] libmachine: (addons-517040)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 23:05:41.523498   20724 main.go:141] libmachine: (addons-517040)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/addons-517040.rawdisk'/>
	I0815 23:05:41.523513   20724 main.go:141] libmachine: (addons-517040)       <target dev='hda' bus='virtio'/>
	I0815 23:05:41.523525   20724 main.go:141] libmachine: (addons-517040)     </disk>
	I0815 23:05:41.523536   20724 main.go:141] libmachine: (addons-517040)     <interface type='network'>
	I0815 23:05:41.523548   20724 main.go:141] libmachine: (addons-517040)       <source network='mk-addons-517040'/>
	I0815 23:05:41.523559   20724 main.go:141] libmachine: (addons-517040)       <model type='virtio'/>
	I0815 23:05:41.523568   20724 main.go:141] libmachine: (addons-517040)     </interface>
	I0815 23:05:41.523580   20724 main.go:141] libmachine: (addons-517040)     <interface type='network'>
	I0815 23:05:41.523593   20724 main.go:141] libmachine: (addons-517040)       <source network='default'/>
	I0815 23:05:41.523603   20724 main.go:141] libmachine: (addons-517040)       <model type='virtio'/>
	I0815 23:05:41.523612   20724 main.go:141] libmachine: (addons-517040)     </interface>
	I0815 23:05:41.523622   20724 main.go:141] libmachine: (addons-517040)     <serial type='pty'>
	I0815 23:05:41.523635   20724 main.go:141] libmachine: (addons-517040)       <target port='0'/>
	I0815 23:05:41.523645   20724 main.go:141] libmachine: (addons-517040)     </serial>
	I0815 23:05:41.523657   20724 main.go:141] libmachine: (addons-517040)     <console type='pty'>
	I0815 23:05:41.523676   20724 main.go:141] libmachine: (addons-517040)       <target type='serial' port='0'/>
	I0815 23:05:41.523688   20724 main.go:141] libmachine: (addons-517040)     </console>
	I0815 23:05:41.523698   20724 main.go:141] libmachine: (addons-517040)     <rng model='virtio'>
	I0815 23:05:41.523707   20724 main.go:141] libmachine: (addons-517040)       <backend model='random'>/dev/random</backend>
	I0815 23:05:41.523717   20724 main.go:141] libmachine: (addons-517040)     </rng>
	I0815 23:05:41.523728   20724 main.go:141] libmachine: (addons-517040)     
	I0815 23:05:41.523737   20724 main.go:141] libmachine: (addons-517040)     
	I0815 23:05:41.523758   20724 main.go:141] libmachine: (addons-517040)   </devices>
	I0815 23:05:41.523772   20724 main.go:141] libmachine: (addons-517040) </domain>
	I0815 23:05:41.523795   20724 main.go:141] libmachine: (addons-517040) 
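	The "define libvirt domain using xml" and "Creating domain..." steps talk to libvirt through its API rather than virsh; a rough sketch of that call sequence with the libvirt.org/go/libvirt bindings (requires CGO and the libvirt development headers; the file path is illustrative and error handling is abbreviated):

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// A domain definition like the <domain type='kvm'> XML printed above.
		xmlBytes, err := os.ReadFile("addons-517040.xml") // illustrative path
		if err != nil {
			log.Fatal(err)
		}

		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// "define libvirt domain using xml": register the persistent domain.
		dom, err := conn.DomainDefineXML(string(xmlBytes))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		// "Creating domain...": boot the freshly defined domain.
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("domain defined and started")
	}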
	I0815 23:05:41.530244   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:87:c9:7f in network default
	I0815 23:05:41.530913   20724 main.go:141] libmachine: (addons-517040) Ensuring networks are active...
	I0815 23:05:41.530939   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:41.531486   20724 main.go:141] libmachine: (addons-517040) Ensuring network default is active
	I0815 23:05:41.531773   20724 main.go:141] libmachine: (addons-517040) Ensuring network mk-addons-517040 is active
	I0815 23:05:41.532441   20724 main.go:141] libmachine: (addons-517040) Getting domain xml...
	I0815 23:05:41.533123   20724 main.go:141] libmachine: (addons-517040) Creating domain...
	I0815 23:05:42.934407   20724 main.go:141] libmachine: (addons-517040) Waiting to get IP...
	I0815 23:05:42.935349   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:42.935668   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:42.935706   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:42.935673   20746 retry.go:31] will retry after 237.590583ms: waiting for machine to come up
	I0815 23:05:43.175044   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:43.175445   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:43.175470   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:43.175402   20746 retry.go:31] will retry after 264.338969ms: waiting for machine to come up
	I0815 23:05:43.441710   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:43.442105   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:43.442132   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:43.442062   20746 retry.go:31] will retry after 302.741357ms: waiting for machine to come up
	I0815 23:05:43.746671   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:43.747144   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:43.747166   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:43.747122   20746 retry.go:31] will retry after 440.364326ms: waiting for machine to come up
	I0815 23:05:44.188535   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:44.188961   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:44.188985   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:44.188907   20746 retry.go:31] will retry after 630.018255ms: waiting for machine to come up
	I0815 23:05:44.820607   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:44.821012   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:44.821040   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:44.820964   20746 retry.go:31] will retry after 605.591929ms: waiting for machine to come up
	I0815 23:05:45.427623   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:45.427941   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:45.427971   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:45.427893   20746 retry.go:31] will retry after 754.34659ms: waiting for machine to come up
	I0815 23:05:46.183452   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:46.183737   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:46.183768   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:46.183723   20746 retry.go:31] will retry after 981.167966ms: waiting for machine to come up
	I0815 23:05:47.166157   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:47.166527   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:47.166553   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:47.166480   20746 retry.go:31] will retry after 1.531776262s: waiting for machine to come up
	I0815 23:05:48.699382   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:48.699721   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:48.699759   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:48.699672   20746 retry.go:31] will retry after 1.472107504s: waiting for machine to come up
	I0815 23:05:50.174440   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:50.174768   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:50.174794   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:50.174723   20746 retry.go:31] will retry after 1.871938627s: waiting for machine to come up
	I0815 23:05:52.048950   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:52.049332   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:52.049360   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:52.049310   20746 retry.go:31] will retry after 3.372664612s: waiting for machine to come up
	I0815 23:05:55.425961   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:55.426376   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:55.426399   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:55.426326   20746 retry.go:31] will retry after 2.813207941s: waiting for machine to come up
	I0815 23:05:58.242815   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:05:58.243240   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find current IP address of domain addons-517040 in network mk-addons-517040
	I0815 23:05:58.243264   20724 main.go:141] libmachine: (addons-517040) DBG | I0815 23:05:58.243203   20746 retry.go:31] will retry after 5.142110925s: waiting for machine to come up
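	The repeated "unable to find current IP address ... will retry after ..." lines are a polling loop with growing, jittered delays while the guest acquires a DHCP lease (minikube's helper is the retry.go referenced in the log). A stand-alone sketch of the pattern; the lookup function and timings are made up for illustration:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for the DHCP-lease lookup the driver
	// performs; here it fails until the simulated lease appears on attempt 6.
	func lookupIP(attempt int) (string, error) {
		if attempt < 6 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.39.72", nil
	}

	func main() {
		delay := 250 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			// Add jitter and grow the delay, roughly matching the
			// 237ms -> 264ms -> ... -> 5.1s progression in the log above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
	}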
	I0815 23:06:03.388238   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.388671   20724 main.go:141] libmachine: (addons-517040) Found IP for machine: 192.168.39.72
	I0815 23:06:03.388694   20724 main.go:141] libmachine: (addons-517040) Reserving static IP address...
	I0815 23:06:03.388709   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has current primary IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.389029   20724 main.go:141] libmachine: (addons-517040) DBG | unable to find host DHCP lease matching {name: "addons-517040", mac: "52:54:00:df:98:d5", ip: "192.168.39.72"} in network mk-addons-517040
	I0815 23:06:03.462668   20724 main.go:141] libmachine: (addons-517040) DBG | Getting to WaitForSSH function...
	I0815 23:06:03.462696   20724 main.go:141] libmachine: (addons-517040) Reserved static IP address: 192.168.39.72
	I0815 23:06:03.462709   20724 main.go:141] libmachine: (addons-517040) Waiting for SSH to be available...
	I0815 23:06:03.465297   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.465742   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.465769   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.465946   20724 main.go:141] libmachine: (addons-517040) DBG | Using SSH client type: external
	I0815 23:06:03.465982   20724 main.go:141] libmachine: (addons-517040) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa (-rw-------)
	I0815 23:06:03.466018   20724 main.go:141] libmachine: (addons-517040) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.72 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:06:03.466032   20724 main.go:141] libmachine: (addons-517040) DBG | About to run SSH command:
	I0815 23:06:03.466047   20724 main.go:141] libmachine: (addons-517040) DBG | exit 0
	I0815 23:06:03.598221   20724 main.go:141] libmachine: (addons-517040) DBG | SSH cmd err, output: <nil>: 
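	WaitForSSH shells out to the system ssh client with the options listed above and treats a successful "exit 0" as proof that sshd is up and the key works. A rough Go equivalent using os/exec, with the host, key path, and options copied from the log (an illustration, not the driver's code):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		key := "/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa"
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", key,
			"-p", "22",
			"docker@192.168.39.72",
			"exit 0", // the probe command: success means SSH is available
		}
		for {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				log.Println("SSH is available")
				return
			}
			log.Println("SSH not ready yet, retrying...")
			time.Sleep(2 * time.Second)
		}
	}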
	I0815 23:06:03.598521   20724 main.go:141] libmachine: (addons-517040) KVM machine creation complete!
	I0815 23:06:03.598864   20724 main.go:141] libmachine: (addons-517040) Calling .GetConfigRaw
	I0815 23:06:03.599370   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:03.599555   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:03.599717   20724 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 23:06:03.599732   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:03.600877   20724 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 23:06:03.600890   20724 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 23:06:03.600895   20724 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 23:06:03.600901   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:03.604210   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.604599   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.604639   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.604764   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:03.604951   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.605086   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.605229   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:03.605369   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:03.605561   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:03.605578   20724 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 23:06:03.705240   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:06:03.705259   20724 main.go:141] libmachine: Detecting the provisioner...
	I0815 23:06:03.705266   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:03.708105   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.708447   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.708482   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.708667   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:03.708863   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.709023   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.709162   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:03.709288   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:03.709451   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:03.709461   20724 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 23:06:03.810794   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 23:06:03.810865   20724 main.go:141] libmachine: found compatible host: buildroot
	I0815 23:06:03.810871   20724 main.go:141] libmachine: Provisioning with buildroot...
	I0815 23:06:03.810878   20724 main.go:141] libmachine: (addons-517040) Calling .GetMachineName
	I0815 23:06:03.811116   20724 buildroot.go:166] provisioning hostname "addons-517040"
	I0815 23:06:03.811138   20724 main.go:141] libmachine: (addons-517040) Calling .GetMachineName
	I0815 23:06:03.811326   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:03.813732   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.814132   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.814163   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.814307   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:03.814508   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.814722   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.814889   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:03.815070   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:03.815301   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:03.815318   20724 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-517040 && echo "addons-517040" | sudo tee /etc/hostname
	I0815 23:06:03.929270   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-517040
	
	I0815 23:06:03.929298   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:03.932137   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.932531   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:03.932560   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:03.932716   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:03.932924   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.933068   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:03.933212   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:03.933385   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:03.933564   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:03.933589   20724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-517040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-517040/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-517040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:06:04.043659   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:06:04.043691   20724 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:06:04.043725   20724 buildroot.go:174] setting up certificates
	I0815 23:06:04.043756   20724 provision.go:84] configureAuth start
	I0815 23:06:04.043769   20724 main.go:141] libmachine: (addons-517040) Calling .GetMachineName
	I0815 23:06:04.044106   20724 main.go:141] libmachine: (addons-517040) Calling .GetIP
	I0815 23:06:04.046931   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.047318   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.047345   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.047489   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.049926   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.050308   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.050345   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.050442   20724 provision.go:143] copyHostCerts
	I0815 23:06:04.050515   20724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:06:04.050634   20724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:06:04.050727   20724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:06:04.050783   20724 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.addons-517040 san=[127.0.0.1 192.168.39.72 addons-517040 localhost minikube]
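	The "generating server cert ... san=[127.0.0.1 192.168.39.72 addons-517040 localhost minikube]" step issues a server certificate whose subject alternative names cover the VM's IP and hostnames. A compact crypto/x509 sketch with the same SANs (self-signed here for brevity, whereas minikube signs server.pem with its CA key; names and paths are illustrative):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-517040"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the log line above.
			DNSNames:    []string{"addons-517040", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.72")},
		}

		// Self-signed in this sketch; the real flow signs with ca-key.pem.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("server.pem", certPEM, 0o644); err != nil {
			log.Fatal(err)
		}
		log.Println("wrote server.pem with SANs for addons-517040")
	}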
	I0815 23:06:04.369628   20724 provision.go:177] copyRemoteCerts
	I0815 23:06:04.369681   20724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:06:04.369708   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.372443   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.372919   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.372948   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.373103   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:04.373299   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.373426   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:04.373563   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:04.452210   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 23:06:04.477044   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 23:06:04.505359   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:06:04.531096   20724 provision.go:87] duration metric: took 487.322626ms to configureAuth
	I0815 23:06:04.531133   20724 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:06:04.531322   20724 config.go:182] Loaded profile config "addons-517040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:06:04.531392   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.534467   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.534693   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.534719   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.534897   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:04.535126   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.535306   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.535462   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:04.535626   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:04.535828   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:04.535850   20724 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:06:04.793582   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:06:04.793609   20724 main.go:141] libmachine: Checking connection to Docker...
	I0815 23:06:04.793617   20724 main.go:141] libmachine: (addons-517040) Calling .GetURL
	I0815 23:06:04.794933   20724 main.go:141] libmachine: (addons-517040) DBG | Using libvirt version 6000000
	I0815 23:06:04.797318   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.797703   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.797729   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.797936   20724 main.go:141] libmachine: Docker is up and running!
	I0815 23:06:04.797951   20724 main.go:141] libmachine: Reticulating splines...
	I0815 23:06:04.797959   20724 client.go:171] duration metric: took 23.987028884s to LocalClient.Create
	I0815 23:06:04.797995   20724 start.go:167] duration metric: took 23.987088847s to libmachine.API.Create "addons-517040"
	I0815 23:06:04.798016   20724 start.go:293] postStartSetup for "addons-517040" (driver="kvm2")
	I0815 23:06:04.798030   20724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:06:04.798055   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.798317   20724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:06:04.798340   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.800645   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.801049   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.801074   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.801191   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:04.801358   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.801477   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:04.801577   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:04.881171   20724 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:06:04.885602   20724 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:06:04.885640   20724 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:06:04.885718   20724 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:06:04.885748   20724 start.go:296] duration metric: took 87.724897ms for postStartSetup
	I0815 23:06:04.885784   20724 main.go:141] libmachine: (addons-517040) Calling .GetConfigRaw
	I0815 23:06:04.886343   20724 main.go:141] libmachine: (addons-517040) Calling .GetIP
	I0815 23:06:04.888928   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.889436   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.889468   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.889749   20724 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/config.json ...
	I0815 23:06:04.889976   20724 start.go:128] duration metric: took 24.096745212s to createHost
	I0815 23:06:04.890001   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.892676   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.893009   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.893044   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.893171   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:04.893468   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.893644   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:04.893789   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:04.893995   20724 main.go:141] libmachine: Using SSH client type: native
	I0815 23:06:04.894198   20724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.72 22 <nil> <nil>}
	I0815 23:06:04.894211   20724 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:06:04.994780   20724 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723763164.969432037
	
	I0815 23:06:04.994806   20724 fix.go:216] guest clock: 1723763164.969432037
	I0815 23:06:04.994816   20724 fix.go:229] Guest: 2024-08-15 23:06:04.969432037 +0000 UTC Remote: 2024-08-15 23:06:04.88999088 +0000 UTC m=+24.196938035 (delta=79.441157ms)
	I0815 23:06:04.994847   20724 fix.go:200] guest clock delta is within tolerance: 79.441157ms
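Note: the fix.go lines above compare the guest clock against the host clock and accept the ~79.4ms skew because it falls inside the allowed tolerance. A small Go sketch of that kind of check follows; the one-second tolerance used here is an assumed value for illustration, not minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock; tolerance is an assumed value, not minikube's constant.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(79441157 * time.Nanosecond) // the ~79.44ms delta seen in the log
	if delta, ok := withinTolerance(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}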
	I0815 23:06:04.994854   20724 start.go:83] releasing machines lock for "addons-517040", held for 24.201703154s
	I0815 23:06:04.994882   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.995181   20724 main.go:141] libmachine: (addons-517040) Calling .GetIP
	I0815 23:06:04.998178   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.998557   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:04.998586   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:04.998753   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.999223   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.999381   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:04.999447   20724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:06:04.999495   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:04.999555   20724 ssh_runner.go:195] Run: cat /version.json
	I0815 23:06:04.999579   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:05.002277   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:05.002336   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:05.002605   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:05.002630   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:05.002729   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:05.002759   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:05.002775   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:05.002973   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:05.002984   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:05.003129   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:05.003130   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:05.003296   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:05.003300   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:05.003438   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:05.079137   20724 ssh_runner.go:195] Run: systemctl --version
	I0815 23:06:05.101347   20724 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:06:05.267014   20724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:06:05.273781   20724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:06:05.273869   20724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:06:05.289994   20724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 23:06:05.290019   20724 start.go:495] detecting cgroup driver to use...
	I0815 23:06:05.290079   20724 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:06:05.306594   20724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:06:05.321261   20724 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:06:05.321329   20724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:06:05.335681   20724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:06:05.349863   20724 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:06:05.468337   20724 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:06:05.633912   20724 docker.go:233] disabling docker service ...
	I0815 23:06:05.633989   20724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:06:05.648827   20724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:06:05.662175   20724 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:06:05.785120   20724 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:06:05.906861   20724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:06:05.921576   20724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:06:05.940062   20724 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:06:05.940120   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:05.951117   20724 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:06:05.951177   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:05.961972   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:05.973232   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:05.984841   20724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:06:05.995999   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:06.007017   20724 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:06.024835   20724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:06:06.036277   20724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:06:06.046555   20724 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 23:06:06.046609   20724 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 23:06:06.064414   20724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:06:06.075463   20724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:06:06.203560   20724 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 23:06:06.342762   20724 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:06:06.342856   20724 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:06:06.348108   20724 start.go:563] Will wait 60s for crictl version
	I0815 23:06:06.348179   20724 ssh_runner.go:195] Run: which crictl
	I0815 23:06:06.352199   20724 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:06:06.394874   20724 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
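Note: the two "Will wait 60s ..." steps above (for the CRI-O socket and then for a crictl version) are poll-until-deadline loops. A minimal Go sketch of that pattern is below; the socket path and 60s timeout come from the log, while the 500ms poll interval is an assumption made for illustration.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for path until it exists or the timeout expires.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Values from the log; the poll interval is an assumption.
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}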
	I0815 23:06:06.395030   20724 ssh_runner.go:195] Run: crio --version
	I0815 23:06:06.423477   20724 ssh_runner.go:195] Run: crio --version
	I0815 23:06:06.454105   20724 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:06:06.455653   20724 main.go:141] libmachine: (addons-517040) Calling .GetIP
	I0815 23:06:06.458156   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:06.458574   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:06.458593   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:06.458849   20724 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:06:06.463103   20724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:06:06.475568   20724 kubeadm.go:883] updating cluster {Name:addons-517040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 23:06:06.475665   20724 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:06:06.475716   20724 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:06:06.507434   20724 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 23:06:06.507503   20724 ssh_runner.go:195] Run: which lz4
	I0815 23:06:06.511640   20724 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 23:06:06.516044   20724 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 23:06:06.516075   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 23:06:07.811863   20724 crio.go:462] duration metric: took 1.300248738s to copy over tarball
	I0815 23:06:07.811948   20724 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 23:06:10.065094   20724 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.253115362s)
	I0815 23:06:10.065122   20724 crio.go:469] duration metric: took 2.253234314s to extract the tarball
	I0815 23:06:10.065129   20724 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 23:06:10.102357   20724 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:06:10.143841   20724 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:06:10.143862   20724 cache_images.go:84] Images are preloaded, skipping loading
	I0815 23:06:10.143869   20724 kubeadm.go:934] updating node { 192.168.39.72 8443 v1.31.0 crio true true} ...
	I0815 23:06:10.143980   20724 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-517040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:06:10.144057   20724 ssh_runner.go:195] Run: crio config
	I0815 23:06:10.196704   20724 cni.go:84] Creating CNI manager for ""
	I0815 23:06:10.196728   20724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 23:06:10.196741   20724 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 23:06:10.196760   20724 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.72 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-517040 NodeName:addons-517040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 23:06:10.196930   20724 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.72
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-517040"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.72
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.72"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 23:06:10.196998   20724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:06:10.207497   20724 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 23:06:10.207566   20724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 23:06:10.217765   20724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0815 23:06:10.234807   20724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:06:10.251575   20724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0815 23:06:10.268168   20724 ssh_runner.go:195] Run: grep 192.168.39.72	control-plane.minikube.internal$ /etc/hosts
	I0815 23:06:10.272116   20724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.72	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:06:10.284591   20724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:06:10.410176   20724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:06:10.428192   20724 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040 for IP: 192.168.39.72
	I0815 23:06:10.428221   20724 certs.go:194] generating shared ca certs ...
	I0815 23:06:10.428240   20724 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.428411   20724 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:06:10.719434   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt ...
	I0815 23:06:10.719464   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt: {Name:mk35b78ed0b44898f8fccf955c44667fbeeb3aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.719651   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key ...
	I0815 23:06:10.719665   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key: {Name:mkfe2022f0e76b2546b591d45db0a65a8271ee44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.719777   20724 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:06:10.820931   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt ...
	I0815 23:06:10.820954   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt: {Name:mk18b07ecaae2f5d5b1d2b1190f207b4fbce25e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.821127   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key ...
	I0815 23:06:10.821141   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key: {Name:mk11e730b8ff488a6904be1f740c8f279d9244f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:10.821232   20724 certs.go:256] generating profile certs ...
	I0815 23:06:10.821283   20724 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.key
	I0815 23:06:10.821300   20724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt with IP's: []
	I0815 23:06:11.058179   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt ...
	I0815 23:06:11.058208   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: {Name:mk0e4ed4fc2b71853657271fe26848031c301741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.058411   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.key ...
	I0815 23:06:11.058425   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.key: {Name:mk74afd4b015ae7865aff62b69ff6fef7f3be912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.058525   20724 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key.f0322d93
	I0815 23:06:11.058546   20724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt.f0322d93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.72]
	I0815 23:06:11.391778   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt.f0322d93 ...
	I0815 23:06:11.391805   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt.f0322d93: {Name:mk9418b889dd16c96ce3fb0bd06373511a63ef74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.391980   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key.f0322d93 ...
	I0815 23:06:11.391998   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key.f0322d93: {Name:mkb80e5a4d71da08a29f1a1e1ce7880ea6fdcd85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.392097   20724 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt.f0322d93 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt
	I0815 23:06:11.392170   20724 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key.f0322d93 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key
	I0815 23:06:11.392216   20724 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.key
	I0815 23:06:11.392232   20724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.crt with IP's: []
	I0815 23:06:11.595480   20724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.crt ...
	I0815 23:06:11.595512   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.crt: {Name:mk3ee1a0e073c1bcf6aa89f1933fd66ca093a883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.595675   20724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.key ...
	I0815 23:06:11.595686   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.key: {Name:mk8f27836cb30023d270b9e91aad5ed309ae2b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:11.595851   20724 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:06:11.595883   20724 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:06:11.595908   20724 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:06:11.595931   20724 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:06:11.596479   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:06:11.627542   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:06:11.652808   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:06:11.677082   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:06:11.702600   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 23:06:11.727819   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 23:06:11.753020   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:06:11.778942   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:06:11.804017   20724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:06:11.831711   20724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 23:06:11.859362   20724 ssh_runner.go:195] Run: openssl version
	I0815 23:06:11.865854   20724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:06:11.877023   20724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:06:11.881891   20724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:06:11.881966   20724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:06:11.891176   20724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 23:06:11.903120   20724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:06:11.907440   20724 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 23:06:11.907493   20724 kubeadm.go:392] StartCluster: {Name:addons-517040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-517040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:06:11.907581   20724 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 23:06:11.907630   20724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 23:06:11.945176   20724 cri.go:89] found id: ""
	I0815 23:06:11.945238   20724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 23:06:11.956098   20724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 23:06:11.966836   20724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 23:06:11.977278   20724 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 23:06:11.977298   20724 kubeadm.go:157] found existing configuration files:
	
	I0815 23:06:11.977349   20724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 23:06:11.987186   20724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 23:06:11.987241   20724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 23:06:11.997681   20724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 23:06:12.007159   20724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 23:06:12.007220   20724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 23:06:12.017495   20724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 23:06:12.027205   20724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 23:06:12.027251   20724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 23:06:12.037410   20724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 23:06:12.047285   20724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 23:06:12.047354   20724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 23:06:12.057717   20724 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 23:06:12.110771   20724 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 23:06:12.110846   20724 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 23:06:12.215755   20724 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 23:06:12.215856   20724 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 23:06:12.215967   20724 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 23:06:12.227259   20724 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 23:06:12.321650   20724 out.go:235]   - Generating certificates and keys ...
	I0815 23:06:12.321788   20724 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 23:06:12.321854   20724 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 23:06:12.496664   20724 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 23:06:12.640291   20724 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 23:06:12.973409   20724 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 23:06:13.160236   20724 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 23:06:13.350509   20724 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 23:06:13.350677   20724 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-517040 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0815 23:06:13.524485   20724 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 23:06:13.524640   20724 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-517040 localhost] and IPs [192.168.39.72 127.0.0.1 ::1]
	I0815 23:06:13.907858   20724 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 23:06:14.056732   20724 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 23:06:14.274345   20724 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 23:06:14.274485   20724 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 23:06:14.406585   20724 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 23:06:14.617253   20724 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 23:06:14.962491   20724 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 23:06:15.190333   20724 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 23:06:15.305454   20724 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 23:06:15.305984   20724 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 23:06:15.308554   20724 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 23:06:15.310909   20724 out.go:235]   - Booting up control plane ...
	I0815 23:06:15.311013   20724 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 23:06:15.311101   20724 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 23:06:15.311169   20724 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 23:06:15.326432   20724 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 23:06:15.333932   20724 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 23:06:15.334021   20724 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 23:06:15.482468   20724 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 23:06:15.482639   20724 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 23:06:15.983865   20724 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.010486ms
	I0815 23:06:15.983985   20724 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 23:06:21.482798   20724 kubeadm.go:310] [api-check] The API server is healthy after 5.502068291s
	I0815 23:06:21.495713   20724 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 23:06:21.512179   20724 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 23:06:21.553546   20724 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 23:06:21.553760   20724 kubeadm.go:310] [mark-control-plane] Marking the node addons-517040 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 23:06:21.566832   20724 kubeadm.go:310] [bootstrap-token] Using token: oyfjn4.4onx040evbjr30d7
	I0815 23:06:21.568107   20724 out.go:235]   - Configuring RBAC rules ...
	I0815 23:06:21.568244   20724 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 23:06:21.575869   20724 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 23:06:21.587706   20724 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 23:06:21.592536   20724 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 23:06:21.596190   20724 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 23:06:21.599812   20724 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 23:06:21.889737   20724 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 23:06:22.328573   20724 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 23:06:22.889112   20724 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 23:06:22.890075   20724 kubeadm.go:310] 
	I0815 23:06:22.890149   20724 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 23:06:22.890158   20724 kubeadm.go:310] 
	I0815 23:06:22.890254   20724 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 23:06:22.890266   20724 kubeadm.go:310] 
	I0815 23:06:22.890292   20724 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 23:06:22.890362   20724 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 23:06:22.890433   20724 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 23:06:22.890444   20724 kubeadm.go:310] 
	I0815 23:06:22.890512   20724 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 23:06:22.890522   20724 kubeadm.go:310] 
	I0815 23:06:22.890602   20724 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 23:06:22.890624   20724 kubeadm.go:310] 
	I0815 23:06:22.890701   20724 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 23:06:22.890809   20724 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 23:06:22.890900   20724 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 23:06:22.890909   20724 kubeadm.go:310] 
	I0815 23:06:22.891041   20724 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 23:06:22.891148   20724 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 23:06:22.891159   20724 kubeadm.go:310] 
	I0815 23:06:22.891260   20724 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oyfjn4.4onx040evbjr30d7 \
	I0815 23:06:22.891388   20724 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0815 23:06:22.891417   20724 kubeadm.go:310] 	--control-plane 
	I0815 23:06:22.891422   20724 kubeadm.go:310] 
	I0815 23:06:22.891528   20724 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 23:06:22.891539   20724 kubeadm.go:310] 
	I0815 23:06:22.891653   20724 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oyfjn4.4onx040evbjr30d7 \
	I0815 23:06:22.891808   20724 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
	I0815 23:06:22.892791   20724 kubeadm.go:310] W0815 23:06:12.091310     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 23:06:22.893205   20724 kubeadm.go:310] W0815 23:06:12.092085     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 23:06:22.893358   20724 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 23:06:22.893415   20724 cni.go:84] Creating CNI manager for ""
	I0815 23:06:22.893428   20724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 23:06:22.895286   20724 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 23:06:22.897034   20724 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 23:06:22.908911   20724 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
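The 496-byte conflist written above is the bridge CNI configuration minikube selects for the kvm2 + crio combination. A quick way to inspect it on the node; the shape noted in the comment is illustrative rather than a copy of the exact file:

    # show the generated bridge CNI config (contents come from minikube's template)
    sudo cat /etc/cni/net.d/1-k8s.conflist
    # typically a plugin chain: a "bridge" plugin with host-local IPAM, followed by a "portmap" entry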
	I0815 23:06:22.929168   20724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 23:06:22.929253   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:22.929280   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-517040 minikube.k8s.io/updated_at=2024_08_15T23_06_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=addons-517040 minikube.k8s.io/primary=true
	I0815 23:06:23.080016   20724 ops.go:34] apiserver oom_adj: -16
	I0815 23:06:23.080168   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:23.580268   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:24.080871   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:24.580819   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:25.080853   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:25.580829   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:26.080783   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:26.580806   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:27.081084   20724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:06:27.194036   20724 kubeadm.go:1113] duration metric: took 4.264833487s to wait for elevateKubeSystemPrivileges
	I0815 23:06:27.194074   20724 kubeadm.go:394] duration metric: took 15.286584162s to StartCluster
	I0815 23:06:27.194097   20724 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:06:27.194240   20724 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:06:27.194718   20724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
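Once the kubeconfig is updated, the addons-517040 context can be exercised directly from the host. A minimal check, assuming kubectl is installed locally; the last command looks for the minikube-rbac clusterrolebinding created a few steps above:

    # confirm the context answers and the RBAC binding from the step above exists
    kubectl config get-contexts addons-517040
    kubectl --context addons-517040 get nodes
    kubectl --context addons-517040 get clusterrolebinding minikube-rbac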
	I0815 23:06:27.194953   20724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 23:06:27.194980   20724 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.72 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:06:27.195054   20724 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
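The toEnable map above is the addon set requested for this profile; the same toggles are exposed through the minikube CLI, for example:

    # show addon status for the profile
    minikube addons list -p addons-517040
    # enable or disable an individual addon
    minikube addons enable ingress -p addons-517040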
	I0815 23:06:27.195150   20724 addons.go:69] Setting yakd=true in profile "addons-517040"
	I0815 23:06:27.195156   20724 addons.go:69] Setting helm-tiller=true in profile "addons-517040"
	I0815 23:06:27.195165   20724 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-517040"
	I0815 23:06:27.195182   20724 addons.go:234] Setting addon yakd=true in "addons-517040"
	I0815 23:06:27.195176   20724 addons.go:69] Setting ingress=true in profile "addons-517040"
	I0815 23:06:27.195187   20724 config.go:182] Loaded profile config "addons-517040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:06:27.195200   20724 addons.go:69] Setting cloud-spanner=true in profile "addons-517040"
	I0815 23:06:27.195210   20724 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-517040"
	I0815 23:06:27.195214   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195214   20724 addons.go:234] Setting addon ingress=true in "addons-517040"
	I0815 23:06:27.195218   20724 addons.go:234] Setting addon cloud-spanner=true in "addons-517040"
	I0815 23:06:27.195191   20724 addons.go:234] Setting addon helm-tiller=true in "addons-517040"
	I0815 23:06:27.195243   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195250   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195254   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195258   20724 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-517040"
	I0815 23:06:27.195279   20724 addons.go:69] Setting default-storageclass=true in profile "addons-517040"
	I0815 23:06:27.195295   20724 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-517040"
	I0815 23:06:27.195314   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195314   20724 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-517040"
	I0815 23:06:27.195347   20724 addons.go:69] Setting registry=true in profile "addons-517040"
	I0815 23:06:27.195365   20724 addons.go:234] Setting addon registry=true in "addons-517040"
	I0815 23:06:27.195383   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195250   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195637   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195648   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195659   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195660   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195671   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195676   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195743   20724 addons.go:69] Setting inspektor-gadget=true in profile "addons-517040"
	I0815 23:06:27.195743   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195752   20724 addons.go:69] Setting gcp-auth=true in profile "addons-517040"
	I0815 23:06:27.195755   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195763   20724 addons.go:234] Setting addon inspektor-gadget=true in "addons-517040"
	I0815 23:06:27.195766   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195769   20724 mustload.go:65] Loading cluster: addons-517040
	I0815 23:06:27.195784   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195785   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.195838   20724 addons.go:69] Setting metrics-server=true in profile "addons-517040"
	I0815 23:06:27.195844   20724 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-517040"
	I0815 23:06:27.195859   20724 addons.go:234] Setting addon metrics-server=true in "addons-517040"
	I0815 23:06:27.195863   20724 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-517040"
	I0815 23:06:27.195871   20724 addons.go:69] Setting storage-provisioner=true in profile "addons-517040"
	I0815 23:06:27.195884   20724 addons.go:69] Setting volcano=true in profile "addons-517040"
	I0815 23:06:27.195888   20724 addons.go:234] Setting addon storage-provisioner=true in "addons-517040"
	I0815 23:06:27.195901   20724 addons.go:69] Setting volumesnapshots=true in profile "addons-517040"
	I0815 23:06:27.195903   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.195905   20724 addons.go:234] Setting addon volcano=true in "addons-517040"
	I0815 23:06:27.195918   20724 addons.go:234] Setting addon volumesnapshots=true in "addons-517040"
	I0815 23:06:27.195919   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.195932   20724 config.go:182] Loaded profile config "addons-517040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:06:27.196019   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196060   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196099   20724 addons.go:69] Setting ingress-dns=true in profile "addons-517040"
	I0815 23:06:27.196134   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196143   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196109   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.196163   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196149   20724 addons.go:234] Setting addon ingress-dns=true in "addons-517040"
	I0815 23:06:27.196165   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196266   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.196241   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.196513   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196540   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196567   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196590   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196778   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.196804   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.196883   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.196912   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.197163   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.197189   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.197265   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.197291   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.197545   20724 out.go:177] * Verifying Kubernetes components...
	I0815 23:06:27.199164   20724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:06:27.216755   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
	I0815 23:06:27.217213   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0815 23:06:27.217396   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.217479   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I0815 23:06:27.217612   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.217815   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.218047   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.218061   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.218126   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.218141   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.218312   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.218332   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.218819   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.218834   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.219337   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.219874   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.219914   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.220544   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.220572   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.220913   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.221073   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.221499   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.221526   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.222189   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.222220   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.232801   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41185
	I0815 23:06:27.233477   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.234241   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.234262   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.234676   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.235316   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.235355   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.236208   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I0815 23:06:27.236729   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.237328   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.237346   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.237741   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.238392   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.238429   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.240170   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
	I0815 23:06:27.240685   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.241199   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.241220   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.241584   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.242166   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.242203   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.252009   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43119
	I0815 23:06:27.252563   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.253245   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.253267   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.253751   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.254000   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.255359   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0815 23:06:27.255929   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.256723   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.256953   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:27.256975   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:27.258962   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0815 23:06:27.258963   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:27.259105   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.259113   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:27.259120   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.259123   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:27.259132   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:27.259139   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:27.259409   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:27.259442   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:27.259450   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	W0815 23:06:27.259534   20724 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
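This warning is non-fatal: volcano is simply skipped because the addon does not support the crio runtime used by this profile. One possible way to keep later runs quiet, assuming addon choices persist in the profile config, is to drop it from the requested set:

    # stop requesting volcano on this crio-backed profile
    minikube addons disable volcano -p addons-517040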
	I0815 23:06:27.259758   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.259950   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.261048   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0815 23:06:27.261362   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.261443   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.261839   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.261880   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.262018   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.262028   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.262478   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.262528   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.262642   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.264423   20724 addons.go:234] Setting addon default-storageclass=true in "addons-517040"
	I0815 23:06:27.264468   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.264469   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.264874   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.264905   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.265426   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.265457   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.266658   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 23:06:27.268114   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 23:06:27.269178   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41731
	I0815 23:06:27.269364   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45037
	I0815 23:06:27.269730   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.270280   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.270297   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.270669   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.271175   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 23:06:27.271239   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.271259   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.271943   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33029
	I0815 23:06:27.272442   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.272996   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.273015   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.273412   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.273797   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 23:06:27.274014   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.274033   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.274235   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0815 23:06:27.274684   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.275268   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.275285   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.275707   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.275763   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0815 23:06:27.276129   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 23:06:27.276425   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.276450   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.276484   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.276977   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.276994   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.277021   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.277551   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.278147   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.278182   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.278810   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.278829   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.278903   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0815 23:06:27.279363   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.279970   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.279986   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.280382   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.280629   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.281644   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 23:06:27.282199   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.282775   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.282814   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.283033   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.283357   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.283385   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.284594   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 23:06:27.285925   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 23:06:27.287029   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 23:06:27.287050   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 23:06:27.287073   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.290652   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.291299   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.291321   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.291557   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.291783   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.291959   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.292152   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
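Each addon manifest is copied over an SSH session like the one opened here and staged under /etc/kubernetes/addons/ on the node (see the rbac-external-attacher.yaml scp above). A quick way to confirm what was staged, assuming the profile is still running:

    # list the addon manifests copied onto the node
    minikube ssh -p addons-517040 -- sudo ls -l /etc/kubernetes/addons/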
	I0815 23:06:27.292697   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45015
	I0815 23:06:27.293426   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.295182   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I0815 23:06:27.295595   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.296122   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.296137   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.296583   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.297160   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.297198   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.297910   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.297935   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.299953   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0815 23:06:27.300563   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.300996   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.301011   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.301417   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.301620   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0815 23:06:27.301790   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.302102   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.302440   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.304094   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.304707   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.304724   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.305094   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.305273   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.307201   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.307666   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35729
	I0815 23:06:27.308153   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.308426   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I0815 23:06:27.308893   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.308909   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.309251   20724 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0815 23:06:27.309497   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.310147   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.310184   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.310360   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.310393   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.310743   20724 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 23:06:27.310762   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 23:06:27.310781   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.311520   20724 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-517040"
	I0815 23:06:27.311562   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:27.311913   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.311951   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.312033   20724 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 23:06:27.312158   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44985
	I0815 23:06:27.312363   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.312384   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.313213   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.313214   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.313861   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.313879   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.314334   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.314528   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.314937   20724 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 23:06:27.316170   20724 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 23:06:27.316188   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 23:06:27.316208   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.316314   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.316623   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.318198   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I0815 23:06:27.318660   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.319265   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.319290   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.320219   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.320448   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.320453   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.321122   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.322086   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0815 23:06:27.322150   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.322460   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.322882   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.322906   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.323402   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.323972   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.324040   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.324203   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40199
	I0815 23:06:27.324478   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.324542   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.324617   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.324777   20724 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 23:06:27.324884   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.325169   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.325360   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.325523   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.325593   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.325789   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.326106   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.326130   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.326177   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41007
	I0815 23:06:27.326386   20724 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 23:06:27.326435   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.326446   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.326531   20724 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 23:06:27.326546   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 23:06:27.326562   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.326576   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.326629   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.326643   20724 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 23:06:27.326747   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.327263   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.327492   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.327548   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0815 23:06:27.327644   20724 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 23:06:27.327659   20724 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 23:06:27.327678   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.328354   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 23:06:27.328370   20724 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 23:06:27.328391   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.328538   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.329017   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.329038   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.329334   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.329504   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.330275   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.330712   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.330733   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.330893   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.330940   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0815 23:06:27.331144   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.331270   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.331305   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.331451   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.332276   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.332293   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.332745   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.332901   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.332954   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.333284   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I0815 23:06:27.333423   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.333447   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.333605   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.333813   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.333827   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.333932   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.334225   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.334363   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.334556   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.335196   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.335214   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.335233   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.335926   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.335932   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.335939   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38253
	I0815 23:06:27.335983   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.336010   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I0815 23:06:27.336030   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.336073   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.336313   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.336329   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.336676   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.336412   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.336460   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.336478   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.337023   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.337104   20724 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 23:06:27.337251   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.337266   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.337327   20724 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 23:06:27.337340   20724 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 23:06:27.337355   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.337393   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.337404   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.337568   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.337583   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.337921   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.338119   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.338373   20724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 23:06:27.338378   20724 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 23:06:27.338373   20724 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 23:06:27.338391   20724 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 23:06:27.338465   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.338979   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.339549   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:27.339627   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 23:06:27.339636   20724 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 23:06:27.339648   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.339600   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:27.340869   20724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 23:06:27.342326   20724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 23:06:27.343930   20724 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 23:06:27.343953   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 23:06:27.345923   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.346013   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346045   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.346076   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346093   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.346144   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346167   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.346189   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346204   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.346244   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.346282   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346299   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.346313   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.346316   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.346330   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.346373   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.346455   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.346492   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.346617   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.346674   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.346722   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.346967   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.347249   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.347459   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.347715   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.348482   20724 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 23:06:27.349298   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.349415   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.349699   20724 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 23:06:27.349716   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 23:06:27.349719   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.349734   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.349739   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.349837   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.350044   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.350181   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.350600   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.351092   20724 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 23:06:27.352465   20724 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 23:06:27.352477   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 23:06:27.352490   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.353189   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.353671   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.353708   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.353883   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.354044   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.354198   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.354311   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.356048   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.356084   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.356105   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.356168   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.356354   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.356509   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.356672   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.357152   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37473
	I0815 23:06:27.357473   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.358010   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.358037   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.358365   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.358521   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43299
	I0815 23:06:27.358662   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.358916   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:27.359393   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:27.359415   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:27.359795   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:27.361944   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.361994   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:27.363527   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:27.363840   20724 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 23:06:27.365155   20724 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 23:06:27.365241   20724 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 23:06:27.365256   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 23:06:27.365275   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.367892   20724 out.go:177]   - Using image docker.io/busybox:stable
	I0815 23:06:27.368872   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.369322   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.369344   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.369410   20724 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 23:06:27.369424   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 23:06:27.369440   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:27.369504   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.369663   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.369793   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.369953   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.372333   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.372659   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:27.372684   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:27.372850   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:27.373136   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:27.373280   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:27.373474   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:27.664327   20724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:06:27.664410   20724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 23:06:27.724251   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 23:06:27.746343   20724 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 23:06:27.746360   20724 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 23:06:27.788769   20724 node_ready.go:35] waiting up to 6m0s for node "addons-517040" to be "Ready" ...
	I0815 23:06:27.789613   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 23:06:27.789633   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 23:06:27.791802   20724 node_ready.go:49] node "addons-517040" has status "Ready":"True"
	I0815 23:06:27.791818   20724 node_ready.go:38] duration metric: took 3.025383ms for node "addons-517040" to be "Ready" ...
	I0815 23:06:27.791826   20724 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 23:06:27.799739   20724 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-frrxx" in "kube-system" namespace to be "Ready" ...
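	The pod_ready polling that starts here is minikube's own readiness check; a rough command-line equivalent, assuming kubeconfig access to the addons-517040 context, would be something like:
	
	    # wait for the system-critical CoreDNS pods the test polls for
	    kubectl --context addons-517040 -n kube-system wait pod \
	        -l k8s-app=kube-dns --for=condition=Ready --timeout=360s
	
	This is only an illustrative sketch of the same check, not the code path the test itself uses.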
	I0815 23:06:27.839141   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 23:06:27.892847   20724 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 23:06:27.892874   20724 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 23:06:27.908151   20724 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 23:06:27.908175   20724 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 23:06:27.908943   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 23:06:27.957701   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 23:06:27.972085   20724 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 23:06:27.972105   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 23:06:27.977647   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 23:06:27.984056   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 23:06:27.984079   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 23:06:27.985289   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 23:06:27.985305   20724 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 23:06:27.994984   20724 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 23:06:27.995007   20724 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 23:06:27.996823   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 23:06:28.056753   20724 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 23:06:28.056785   20724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 23:06:28.061361   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 23:06:28.105306   20724 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 23:06:28.105322   20724 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 23:06:28.170545   20724 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 23:06:28.170572   20724 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 23:06:28.171829   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 23:06:28.171842   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 23:06:28.187642   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 23:06:28.192318   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 23:06:28.192344   20724 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 23:06:28.204582   20724 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 23:06:28.204603   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 23:06:28.382770   20724 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 23:06:28.382792   20724 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 23:06:28.386261   20724 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 23:06:28.386288   20724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 23:06:28.387126   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 23:06:28.387141   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 23:06:28.434875   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 23:06:28.434894   20724 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 23:06:28.454616   20724 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 23:06:28.454641   20724 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 23:06:28.460499   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 23:06:28.583555   20724 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 23:06:28.583580   20724 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 23:06:28.589387   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 23:06:28.589407   20724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 23:06:28.616404   20724 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 23:06:28.616428   20724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 23:06:28.738774   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 23:06:28.744813   20724 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 23:06:28.744831   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 23:06:28.785786   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 23:06:28.785806   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 23:06:28.850585   20724 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 23:06:28.850618   20724 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 23:06:28.960740   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 23:06:28.988827   20724 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 23:06:28.988858   20724 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 23:06:29.204574   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 23:06:29.204599   20724 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 23:06:29.318008   20724 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 23:06:29.318036   20724 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 23:06:29.403661   20724 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 23:06:29.403680   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 23:06:29.478833   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 23:06:29.478855   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 23:06:29.602942   20724 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 23:06:29.602965   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 23:06:29.737502   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 23:06:29.807082   20724 pod_ready.go:103] pod "coredns-6f6b679f8f-frrxx" in "kube-system" namespace has status "Ready":"False"
	I0815 23:06:29.826176   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 23:06:29.826196   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 23:06:29.881818   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 23:06:30.097328   20724 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 23:06:30.097354   20724 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 23:06:30.190703   20724 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.526260081s)
	I0815 23:06:30.190741   20724 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
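	The ConfigMap rewrite completed above splices a hosts block into the Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.39.1). A minimal way to confirm the injected record, assuming kubectl access to the same cluster, is:
	
	    kubectl --context addons-517040 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	
	which should print the hosts stanza with the 192.168.39.1 host.minikube.internal entry followed by fallthrough.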
	I0815 23:06:30.190793   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.466513863s)
	I0815 23:06:30.190853   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:30.190869   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:30.191269   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:30.191271   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:30.191290   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:30.191300   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:30.191312   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:30.191545   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:30.191599   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:30.191618   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:30.204624   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:30.204644   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:30.204883   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:30.204904   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:30.392925   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 23:06:30.695021   20724 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-517040" context rescaled to 1 replicas
	I0815 23:06:31.421309   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.582127006s)
	I0815 23:06:31.421365   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:31.421378   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:31.421753   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:31.421754   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:31.421785   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:31.421799   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:31.421816   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:31.422049   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:31.422063   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:31.977761   20724 pod_ready.go:93] pod "coredns-6f6b679f8f-frrxx" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:31.977784   20724 pod_ready.go:82] duration metric: took 4.17800836s for pod "coredns-6f6b679f8f-frrxx" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:31.977795   20724 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mtm8z" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:32.196153   20724 pod_ready.go:93] pod "coredns-6f6b679f8f-mtm8z" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:32.196175   20724 pod_ready.go:82] duration metric: took 218.373877ms for pod "coredns-6f6b679f8f-mtm8z" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:32.196184   20724 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:33.330871   20724 pod_ready.go:93] pod "etcd-addons-517040" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:33.330893   20724 pod_ready.go:82] duration metric: took 1.13470316s for pod "etcd-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:33.330904   20724 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:33.894328   20724 pod_ready.go:93] pod "kube-apiserver-addons-517040" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:33.894349   20724 pod_ready.go:82] duration metric: took 563.438426ms for pod "kube-apiserver-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:33.894360   20724 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:34.379799   20724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 23:06:34.379843   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:34.382681   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:34.383127   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:34.383156   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:34.383363   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:34.383554   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:34.383690   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:34.383859   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:34.965508   20724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 23:06:35.240997   20724 addons.go:234] Setting addon gcp-auth=true in "addons-517040"
	I0815 23:06:35.241056   20724 host.go:66] Checking if "addons-517040" exists ...
	I0815 23:06:35.241458   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:35.241488   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:35.256751   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0815 23:06:35.257219   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:35.257781   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:35.257809   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:35.258160   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:35.258773   20724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:06:35.258806   20724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:06:35.274597   20724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
	I0815 23:06:35.275039   20724 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:06:35.275554   20724 main.go:141] libmachine: Using API Version  1
	I0815 23:06:35.275582   20724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:06:35.275975   20724 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:06:35.276180   20724 main.go:141] libmachine: (addons-517040) Calling .GetState
	I0815 23:06:35.277931   20724 main.go:141] libmachine: (addons-517040) Calling .DriverName
	I0815 23:06:35.278184   20724 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 23:06:35.278212   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHHostname
	I0815 23:06:35.280818   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:35.281237   20724 main.go:141] libmachine: (addons-517040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:98:d5", ip: ""} in network mk-addons-517040: {Iface:virbr1 ExpiryTime:2024-08-16 00:05:56 +0000 UTC Type:0 Mac:52:54:00:df:98:d5 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:addons-517040 Clientid:01:52:54:00:df:98:d5}
	I0815 23:06:35.281289   20724 main.go:141] libmachine: (addons-517040) DBG | domain addons-517040 has defined IP address 192.168.39.72 and MAC address 52:54:00:df:98:d5 in network mk-addons-517040
	I0815 23:06:35.281513   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHPort
	I0815 23:06:35.281715   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHKeyPath
	I0815 23:06:35.281902   20724 main.go:141] libmachine: (addons-517040) Calling .GetSSHUsername
	I0815 23:06:35.282049   20724 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/addons-517040/id_rsa Username:docker}
	I0815 23:06:35.678531   20724 pod_ready.go:93] pod "kube-controller-manager-addons-517040" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:35.678562   20724 pod_ready.go:82] duration metric: took 1.78419486s for pod "kube-controller-manager-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.678579   20724 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cg5sj" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.774000   20724 pod_ready.go:93] pod "kube-proxy-cg5sj" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:35.774026   20724 pod_ready.go:82] duration metric: took 95.438465ms for pod "kube-proxy-cg5sj" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.774039   20724 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.826526   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.917546473s)
	I0815 23:06:35.826584   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.826596   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.826628   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.868895718s)
	I0815 23:06:35.826662   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.826677   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.826685   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.849013533s)
	I0815 23:06:35.826707   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.826718   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.826780   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.829930542s)
	I0815 23:06:35.826819   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.826835   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827022   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827047   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827056   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827066   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827072   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827076   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827080   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827081   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827095   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827086   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827142   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827153   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827161   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.765776081s)
	I0815 23:06:35.827169   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827177   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827177   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827185   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827186   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827196   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827257   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.639577874s)
	I0815 23:06:35.827282   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827291   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827349   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.366826699s)
	I0815 23:06:35.827362   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827370   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827451   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827458   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.088659648s)
	I0815 23:06:35.827470   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827476   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827478   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827484   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827493   20724 addons.go:475] Verifying addon ingress=true in "addons-517040"
	I0815 23:06:35.827542   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.866778353s)
	I0815 23:06:35.827598   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827616   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827637   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827644   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827652   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827659   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827736   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827760   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.827778   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827789   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.827797   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827802   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.827805   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827810   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.827813   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.827817   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.828168   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.828189   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.828206   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.828218   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.828580   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.828625   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.828633   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.828732   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.828752   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.828764   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.828784   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.828791   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.829212   20724 out.go:177] * Verifying ingress addon...
	I0815 23:06:35.829965   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.829982   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.829990   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.829998   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.830049   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.830067   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.830077   20724 addons.go:475] Verifying addon metrics-server=true in "addons-517040"
	I0815 23:06:35.830928   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.830963   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.830970   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.831642   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.831660   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.831724   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.831778   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.831785   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.832576   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.832595   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.832604   20724 addons.go:475] Verifying addon registry=true in "addons-517040"
	I0815 23:06:35.827557   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.833018   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.833214   20724 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 23:06:35.833904   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.833925   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.833937   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.833954   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:35.833966   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:35.834173   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:35.834211   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:35.834223   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:35.834340   20724 out.go:177] * Verifying registry addon...
	I0815 23:06:35.835574   20724 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-517040 service yakd-dashboard -n yakd-dashboard
	
	I0815 23:06:35.836315   20724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 23:06:35.916088   20724 pod_ready.go:93] pod "kube-scheduler-addons-517040" in "kube-system" namespace has status "Ready":"True"
	I0815 23:06:35.916119   20724 pod_ready.go:82] duration metric: took 142.071089ms for pod "kube-scheduler-addons-517040" in "kube-system" namespace to be "Ready" ...
	I0815 23:06:35.916130   20724 pod_ready.go:39] duration metric: took 8.124291977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 23:06:35.916147   20724 api_server.go:52] waiting for apiserver process to appear ...
	I0815 23:06:35.916207   20724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:06:35.963068   20724 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 23:06:35.963088   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:35.963614   20724 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 23:06:35.963636   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:36.080785   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:36.080809   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:36.081083   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:36.081104   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:36.081121   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:36.453925   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:36.454448   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:36.632917   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.751058543s)
	I0815 23:06:36.632971   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:36.632986   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:36.633054   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.895495311s)
	W0815 23:06:36.633096   20724 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 23:06:36.633134   20724 retry.go:31] will retry after 302.814585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
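	The first apply fails because the VolumeSnapshotClass object is submitted in the same batch as the CRDs that define it, so the API server has no mapping for the new kind yet; minikube simply retries (and, a few lines below, re-applies with --force) once the CRDs are registered. Outside the test, the usual way to avoid this race is to apply the CRD manifests first and wait for them to become Established before creating objects of the new kind; a hedged sketch using the same manifest paths:
	
	    # apply only the CRD, then block until the API server has registered it
	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=Established --timeout=60s \
	        crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    # only now create VolumeSnapshotClass objects
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml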
	I0815 23:06:36.633325   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:36.633341   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:36.633351   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:36.633365   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:36.633605   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:36.633621   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:36.633630   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:36.881432   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:36.882348   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:36.936353   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 23:06:37.340028   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:37.344305   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:37.864864   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.471881309s)
	I0815 23:06:37.864920   20724 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.586712652s)
	I0815 23:06:37.864954   20724 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.948726479s)
	I0815 23:06:37.864980   20724 api_server.go:72] duration metric: took 10.669971599s to wait for apiserver process to appear ...
	I0815 23:06:37.864989   20724 api_server.go:88] waiting for apiserver healthz status ...
	I0815 23:06:37.864922   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:37.865116   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:37.865007   20724 api_server.go:253] Checking apiserver healthz at https://192.168.39.72:8443/healthz ...
	I0815 23:06:37.865370   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:37.865398   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:37.865421   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:37.865448   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:37.865399   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:37.865764   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:37.865770   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:37.865780   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:37.865792   20724 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-517040"
	I0815 23:06:37.866818   20724 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 23:06:37.867909   20724 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 23:06:37.869530   20724 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 23:06:37.870532   20724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 23:06:37.870865   20724 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 23:06:37.870879   20724 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 23:06:37.899962   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:37.900128   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:37.909431   20724 api_server.go:279] https://192.168.39.72:8443/healthz returned 200:
	ok
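	The healthz probe logged here hits the apiserver directly over HTTPS; the same check can be reproduced through kubectl without handling client certificates, for example:
	
	    kubectl --context addons-517040 get --raw='/healthz'
	
	which returns the plain string "ok" when the control plane is healthy (a /readyz query works the same way on current Kubernetes versions).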
	I0815 23:06:37.924754   20724 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 23:06:37.924785   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:37.925935   20724 api_server.go:141] control plane version: v1.31.0
	I0815 23:06:37.925965   20724 api_server.go:131] duration metric: took 60.968126ms to wait for apiserver health ...
	I0815 23:06:37.925976   20724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 23:06:37.979429   20724 system_pods.go:59] 19 kube-system pods found
	I0815 23:06:37.979469   20724 system_pods.go:61] "coredns-6f6b679f8f-frrxx" [4c35a93c-3c9b-4cea-92cb-531486f62524] Running
	I0815 23:06:37.979477   20724 system_pods.go:61] "coredns-6f6b679f8f-mtm8z" [d8f0df8d-c410-42be-8666-0163180a0538] Running
	I0815 23:06:37.979486   20724 system_pods.go:61] "csi-hostpath-attacher-0" [01dbe91e-f366-491a-8be7-b218b193563b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 23:06:37.979495   20724 system_pods.go:61] "csi-hostpath-resizer-0" [61b797df-07bd-4c06-b75c-c53c45041656] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 23:06:37.979511   20724 system_pods.go:61] "csi-hostpathplugin-czvm7" [54f95c32-b72c-4a0a-8cb5-ef390efa1828] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 23:06:37.979522   20724 system_pods.go:61] "etcd-addons-517040" [a383b556-e08e-4662-a12b-a216b451adae] Running
	I0815 23:06:37.979528   20724 system_pods.go:61] "kube-apiserver-addons-517040" [8cb5a50a-7182-4950-8536-1c9096d610b6] Running
	I0815 23:06:37.979533   20724 system_pods.go:61] "kube-controller-manager-addons-517040" [2621a39a-9e97-4529-9f42-14a71926f35b] Running
	I0815 23:06:37.979542   20724 system_pods.go:61] "kube-ingress-dns-minikube" [53e62c76-994b-4d37-9ac3-fada87d1d0c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0815 23:06:37.979551   20724 system_pods.go:61] "kube-proxy-cg5sj" [ede8c3a9-8c4a-44a9-b8d9-6db190ceae87] Running
	I0815 23:06:37.979560   20724 system_pods.go:61] "kube-scheduler-addons-517040" [b7257417-cd09-4f5b-ae64-4c2109240535] Running
	I0815 23:06:37.979572   20724 system_pods.go:61] "metrics-server-8988944d9-4mjqf" [f4e01981-c592-4b6b-a285-4046cf8c68c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 23:06:37.979584   20724 system_pods.go:61] "nvidia-device-plugin-daemonset-62jx9" [e1e1e2d3-eb2b-497d-9a69-d33c5428ad96] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0815 23:06:37.979595   20724 system_pods.go:61] "registry-6fb4cdfc84-g5m9x" [3fa1cd07-9f55-41bb-85a9-a958de7f5cbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 23:06:37.979603   20724 system_pods.go:61] "registry-proxy-h2mkz" [22fe5d24-ea50-43c5-a4bf-ee443e253852] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 23:06:37.979613   20724 system_pods.go:61] "snapshot-controller-56fcc65765-pldzx" [dadb75f8-7f43-4070-a2a0-42efc5ee3c44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 23:06:37.979625   20724 system_pods.go:61] "snapshot-controller-56fcc65765-ttz7q" [c029e2be-05d3-4a1b-8689-f32802401e3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 23:06:37.979633   20724 system_pods.go:61] "storage-provisioner" [a4cede15-f6e5-4422-a61f-260751693d94] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 23:06:37.979644   20724 system_pods.go:61] "tiller-deploy-b48cc5f79-frmxp" [662d1936-5dbb-49d3-a200-0d9f9d807bfe] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0815 23:06:37.979655   20724 system_pods.go:74] duration metric: took 53.67235ms to wait for pod list to return data ...
	I0815 23:06:37.979667   20724 default_sa.go:34] waiting for default service account to be created ...
	I0815 23:06:38.001036   20724 default_sa.go:45] found service account: "default"
	I0815 23:06:38.001072   20724 default_sa.go:55] duration metric: took 21.395911ms for default service account to be created ...
	I0815 23:06:38.001085   20724 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 23:06:38.029108   20724 system_pods.go:86] 19 kube-system pods found
	I0815 23:06:38.029139   20724 system_pods.go:89] "coredns-6f6b679f8f-frrxx" [4c35a93c-3c9b-4cea-92cb-531486f62524] Running
	I0815 23:06:38.029145   20724 system_pods.go:89] "coredns-6f6b679f8f-mtm8z" [d8f0df8d-c410-42be-8666-0163180a0538] Running
	I0815 23:06:38.029153   20724 system_pods.go:89] "csi-hostpath-attacher-0" [01dbe91e-f366-491a-8be7-b218b193563b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 23:06:38.029160   20724 system_pods.go:89] "csi-hostpath-resizer-0" [61b797df-07bd-4c06-b75c-c53c45041656] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 23:06:38.029166   20724 system_pods.go:89] "csi-hostpathplugin-czvm7" [54f95c32-b72c-4a0a-8cb5-ef390efa1828] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 23:06:38.029171   20724 system_pods.go:89] "etcd-addons-517040" [a383b556-e08e-4662-a12b-a216b451adae] Running
	I0815 23:06:38.029175   20724 system_pods.go:89] "kube-apiserver-addons-517040" [8cb5a50a-7182-4950-8536-1c9096d610b6] Running
	I0815 23:06:38.029179   20724 system_pods.go:89] "kube-controller-manager-addons-517040" [2621a39a-9e97-4529-9f42-14a71926f35b] Running
	I0815 23:06:38.029186   20724 system_pods.go:89] "kube-ingress-dns-minikube" [53e62c76-994b-4d37-9ac3-fada87d1d0c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0815 23:06:38.029191   20724 system_pods.go:89] "kube-proxy-cg5sj" [ede8c3a9-8c4a-44a9-b8d9-6db190ceae87] Running
	I0815 23:06:38.029195   20724 system_pods.go:89] "kube-scheduler-addons-517040" [b7257417-cd09-4f5b-ae64-4c2109240535] Running
	I0815 23:06:38.029202   20724 system_pods.go:89] "metrics-server-8988944d9-4mjqf" [f4e01981-c592-4b6b-a285-4046cf8c68c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 23:06:38.029213   20724 system_pods.go:89] "nvidia-device-plugin-daemonset-62jx9" [e1e1e2d3-eb2b-497d-9a69-d33c5428ad96] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0815 23:06:38.029222   20724 system_pods.go:89] "registry-6fb4cdfc84-g5m9x" [3fa1cd07-9f55-41bb-85a9-a958de7f5cbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 23:06:38.029227   20724 system_pods.go:89] "registry-proxy-h2mkz" [22fe5d24-ea50-43c5-a4bf-ee443e253852] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 23:06:38.029232   20724 system_pods.go:89] "snapshot-controller-56fcc65765-pldzx" [dadb75f8-7f43-4070-a2a0-42efc5ee3c44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 23:06:38.029239   20724 system_pods.go:89] "snapshot-controller-56fcc65765-ttz7q" [c029e2be-05d3-4a1b-8689-f32802401e3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 23:06:38.029245   20724 system_pods.go:89] "storage-provisioner" [a4cede15-f6e5-4422-a61f-260751693d94] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 23:06:38.029250   20724 system_pods.go:89] "tiller-deploy-b48cc5f79-frmxp" [662d1936-5dbb-49d3-a200-0d9f9d807bfe] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0815 23:06:38.029259   20724 system_pods.go:126] duration metric: took 28.168446ms to wait for k8s-apps to be running ...
	I0815 23:06:38.029266   20724 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 23:06:38.029309   20724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:06:38.106718   20724 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 23:06:38.106745   20724 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 23:06:38.245700   20724 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 23:06:38.245721   20724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 23:06:38.321313   20724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 23:06:38.337074   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:38.340116   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:38.375655   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:38.837459   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:38.839489   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:38.875875   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:39.337973   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:39.340838   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:39.377564   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:39.626804   20724 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.597472157s)
	I0815 23:06:39.626846   20724 system_svc.go:56] duration metric: took 1.597577114s WaitForService to wait for kubelet
	I0815 23:06:39.626858   20724 kubeadm.go:582] duration metric: took 12.43184741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:06:39.626881   20724 node_conditions.go:102] verifying NodePressure condition ...
	I0815 23:06:39.626813   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.690407741s)
	I0815 23:06:39.626960   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:39.626980   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:39.627368   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:39.627384   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:39.627403   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:39.627411   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:39.627634   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:39.627653   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:39.627667   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:39.630491   20724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:06:39.630512   20724 node_conditions.go:123] node cpu capacity is 2
	I0815 23:06:39.630526   20724 node_conditions.go:105] duration metric: took 3.638741ms to run NodePressure ...
	I0815 23:06:39.630539   20724 start.go:241] waiting for startup goroutines ...
	I0815 23:06:39.894802   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:39.909389   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:39.909613   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:40.072048   20724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.750694219s)
	I0815 23:06:40.072102   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:40.072117   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:40.072402   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:40.072423   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:40.072426   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:40.072433   20724 main.go:141] libmachine: Making call to close driver server
	I0815 23:06:40.072442   20724 main.go:141] libmachine: (addons-517040) Calling .Close
	I0815 23:06:40.072698   20724 main.go:141] libmachine: (addons-517040) DBG | Closing plugin on server side
	I0815 23:06:40.072729   20724 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:06:40.072742   20724 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:06:40.075046   20724 addons.go:475] Verifying addon gcp-auth=true in "addons-517040"
	I0815 23:06:40.077673   20724 out.go:177] * Verifying gcp-auth addon...
	I0815 23:06:40.079543   20724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 23:06:40.093612   20724 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 23:06:40.093637   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:40.338467   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:40.341267   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:40.377709   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:40.584233   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:40.838616   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:40.841721   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:40.875508   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:41.083782   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:41.349458   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:41.349870   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:41.375828   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:41.582636   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:41.838313   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:41.839952   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:41.874906   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:42.082589   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:42.337329   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:42.338669   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:42.375604   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:42.698862   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:42.838291   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:42.839898   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:42.876494   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:43.083914   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:43.338263   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:43.339842   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:43.376566   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:43.584036   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:43.838159   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:43.839938   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:43.875682   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:44.083404   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:44.337126   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:44.340899   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:44.375677   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:44.583474   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:44.837797   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:44.839639   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:44.875592   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:45.083241   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:45.337952   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:45.340135   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:45.376667   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:45.583972   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:45.837687   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:45.839347   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:45.875457   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:46.083487   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:46.343347   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:46.343410   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:46.443138   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:46.584108   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:46.838944   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:46.840733   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:46.875332   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:47.082989   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:47.337382   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:47.339503   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:47.375968   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:47.584314   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:47.838412   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:47.840156   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:47.875033   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:48.083534   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:48.336949   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:48.340008   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:48.376192   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:48.583998   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:49.302644   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:49.304769   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:49.304925   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:49.305029   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:49.337375   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:49.339836   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:49.375309   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:49.582807   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:49.837748   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:49.839147   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:49.875141   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:50.083372   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:50.336939   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:50.339292   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:50.375397   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:50.583954   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:50.838436   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:50.840068   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:50.876284   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:51.083964   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:51.340251   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:51.345444   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:51.376720   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:51.584231   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:51.838127   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:51.840165   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:51.876163   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:52.083037   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:52.337075   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:52.340037   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:52.375454   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:52.584061   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:52.838525   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:52.841532   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:52.876035   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:53.085277   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:53.340215   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:53.342882   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:53.375539   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:53.583526   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:53.839537   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:53.842395   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:53.888568   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:54.083379   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:54.337476   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:54.339806   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:54.375852   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:54.582998   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:54.838321   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:54.839707   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:54.875426   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:55.082568   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:55.342692   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:55.342831   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:55.375584   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:55.583030   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:55.837335   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:55.838771   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:55.875665   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:56.083492   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:56.339511   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:56.341633   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:56.377904   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:56.582717   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:56.838031   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:56.839965   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:56.874676   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:57.084416   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:57.338527   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:57.340011   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:57.376230   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:57.584020   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:57.850160   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:57.850468   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:57.952917   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:58.083122   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:58.337855   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:58.339786   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:58.375904   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:58.583599   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:58.846281   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:58.859304   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:58.946349   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:59.084463   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:59.337272   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:59.340238   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:59.380055   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:06:59.583892   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:06:59.837944   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:06:59.839660   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:06:59.875693   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:00.085471   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:00.337231   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:00.339412   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:00.375349   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:00.583102   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:00.837517   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:00.843220   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:00.875008   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:01.083884   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:01.338493   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:01.340475   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:01.375339   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:01.582983   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:01.838727   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:01.840252   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:01.876040   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:02.083995   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:02.338754   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:02.340022   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:02.375492   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:02.583755   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:02.839195   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:02.840198   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:02.875193   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:03.087477   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:03.338351   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:03.342592   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:03.377073   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:03.584240   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:03.839122   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:03.840253   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:03.874638   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:04.083148   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:04.337707   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:04.339236   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:04.374695   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:04.582445   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:04.838371   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:04.839901   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:04.876544   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:05.082888   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:05.337380   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:05.339022   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:05.375009   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:05.596159   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:05.837788   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:05.841449   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:05.874473   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:06.083053   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:06.337436   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:06.339639   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:06.375890   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:06.582673   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:06.915031   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:06.915032   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:06.916904   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:07.083932   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:07.337677   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:07.342120   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:07.375803   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:07.584555   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:07.837796   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:07.839507   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:07.875659   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:08.084063   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:08.338110   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:08.340452   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 23:07:08.375544   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:08.583357   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:08.838314   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:08.840022   20724 kapi.go:107] duration metric: took 33.003705789s to wait for kubernetes.io/minikube-addons=registry ...
	I0815 23:07:08.874755   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:09.083315   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:09.337139   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:09.376070   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:09.583415   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:09.841396   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:09.875252   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:10.082862   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:10.337874   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:10.375378   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:10.582843   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:10.837873   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:10.875296   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:11.083576   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:11.338360   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:11.375039   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:11.583926   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:11.837892   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:11.875818   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:12.204074   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:12.339930   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:12.375930   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:12.585041   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:12.842123   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:12.883875   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:13.084143   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:13.340598   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:13.377507   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:13.587453   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:13.837874   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:13.874768   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:14.083350   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:14.337049   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:14.375416   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:14.582636   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:14.837695   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:14.874913   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:15.083281   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:15.336939   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:15.375566   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:15.598071   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:16.037502   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:16.038349   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:16.083240   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:16.341118   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:16.376717   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:16.583513   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:16.837268   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:16.874695   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:17.082471   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:17.337882   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:17.375511   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:17.584105   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:17.838206   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:17.876613   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:18.083831   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:18.338094   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:18.375400   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:18.583227   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:18.838300   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:18.875873   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:19.083042   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:19.341865   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:19.374706   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:19.583481   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:19.839067   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:19.941183   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:20.085275   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:20.338476   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:20.374848   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:20.583717   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:20.845411   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:20.875309   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:21.082789   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:21.338435   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:21.376114   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:21.584004   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:21.838481   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:21.882078   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:22.084871   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:22.338372   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:22.377119   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:22.583762   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:22.837720   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:22.876144   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:23.084717   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:23.509268   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:23.511201   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:23.610452   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:23.838438   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:23.875180   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:24.114139   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:24.338827   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:24.375453   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:24.583516   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:24.837512   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:24.875357   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:25.086986   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:25.338386   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:25.375580   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:25.582946   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:25.837692   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:25.875784   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:26.083424   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:26.337298   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:26.375791   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:26.584102   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:26.847591   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:26.955867   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:27.084951   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:27.338047   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:27.377186   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:27.583647   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:27.838769   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:27.875256   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:28.082645   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:28.338068   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:28.382245   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:28.584435   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:28.838775   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:28.876175   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:29.084675   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:29.337470   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:29.374433   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:29.583272   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:29.838421   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:29.874702   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:30.085036   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:30.346771   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:30.379734   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:30.583120   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:30.839653   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:30.875905   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:31.083502   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:31.337441   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:31.377221   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:31.588774   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:31.837563   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:31.874686   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:32.083593   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:32.338196   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:32.376126   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:32.583636   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:32.838131   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:32.875813   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:33.083324   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:33.337572   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:33.374756   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:33.583714   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:33.837615   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:33.875502   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:34.083978   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:34.338862   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:34.375854   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:34.584381   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:34.838528   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:34.876830   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 23:07:35.083511   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:35.337924   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:35.375772   20724 kapi.go:107] duration metric: took 57.505237794s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 23:07:35.583479   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:35.838225   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:36.084266   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:36.338212   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:36.583991   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:36.838197   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:37.083084   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:37.337675   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:37.584197   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:37.838174   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:38.083771   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:38.338026   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:38.582624   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:38.838017   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:39.083573   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:39.337992   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:39.583444   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:39.837945   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:40.089391   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:40.337436   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:40.583802   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:40.837388   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:41.082669   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:41.338910   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:41.583084   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:41.838657   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:42.417331   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:42.417901   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:42.583759   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:42.837115   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:43.082635   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:43.337294   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:43.582841   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:43.837323   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:44.083282   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:44.339312   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:44.792306   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:44.837781   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:45.083231   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:45.338315   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:45.583343   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:45.839170   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:46.085225   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:46.337904   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:46.583822   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:46.838767   20724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 23:07:47.083670   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:47.339992   20724 kapi.go:107] duration metric: took 1m11.506768588s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 23:07:47.582653   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:48.083974   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:48.586584   20724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 23:07:49.083032   20724 kapi.go:107] duration metric: took 1m9.003486345s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 23:07:49.084992   20724 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-517040 cluster.
	I0815 23:07:49.086394   20724 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 23:07:49.088085   20724 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 23:07:49.089344   20724 out.go:177] * Enabled addons: default-storageclass, ingress-dns, cloud-spanner, metrics-server, helm-tiller, storage-provisioner, nvidia-device-plugin, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0815 23:07:49.090595   20724 addons.go:510] duration metric: took 1m21.895542853s for enable addons: enabled=[default-storageclass ingress-dns cloud-spanner metrics-server helm-tiller storage-provisioner nvidia-device-plugin yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0815 23:07:49.090628   20724 start.go:246] waiting for cluster config update ...
	I0815 23:07:49.090642   20724 start.go:255] writing updated cluster config ...
	I0815 23:07:49.090881   20724 ssh_runner.go:195] Run: rm -f paused
	I0815 23:07:49.141221   20724 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 23:07:49.142897   20724 out.go:177] * Done! kubectl is now configured to use "addons-517040" cluster and "default" namespace by default
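
	The gcp-auth output above notes that a pod can opt out of credential mounting by carrying the `gcp-auth-skip-secret` label. As a minimal illustrative sketch only (not part of the test run): a client-go program that creates such a pod; the pod name, image, and namespace here are hypothetical placeholders.

	```go
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the local kubeconfig (same config kubectl uses for the
		// addons-517040 context after "minikube start").
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-auth-demo", // hypothetical name, for illustration only
				Labels: map[string]string{
					// Pods carrying this label are skipped by the gcp-auth webhook,
					// so no GCP credentials are mounted into them.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{
						Name:    "app",
						Image:   "gcr.io/k8s-minikube/busybox", // placeholder image
						Command: []string{"sleep", "3600"},
					},
				},
			},
		}

		created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("created pod %s with the gcp-auth-skip-secret label\n", created.Name)
	}
	```

	The equivalent effect can be had by adding the same label in a pod manifest; existing pods need to be recreated (or the addon re-enabled with --refresh, as the log above says) for a change in mounting behavior to apply.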
	
	
	==> CRI-O <==
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.525953079Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d257198f-cc7a-4aba-8744-4f74838d320c name=/runtime.v1.RuntimeService/Version
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.526904622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83ea48c0-5db1-43fb-9600-abcc55706e2e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.528120554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763604528095510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83ea48c0-5db1-43fb-9600-abcc55706e2e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.528870652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcd34055-f4cc-4480-a17f-bfbc411afcd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.528979684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcd34055-f4cc-4480-a17f-bfbc411afcd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.529285375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:587d41a9670abbcb893c28180eb87709353a9042620aede43cce0d4211917757,PodSandboxId:1481e58423b29e2a8a2c6284fab05d8808d00d4f57cee1ed94f5bdbd08ce1972,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723763483744240081,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bxccf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 633a66a4-e3b2-442f-8b09-ab0c395605df,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7aaca643dfd9b8aa29f0cb69a7b703b8a0616b2bd0b0f757450625ea7a29456,PodSandboxId:00fb62481fea5491a7ae7a30917dd3a39960f636ec1e0daa293d907625668f4a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723763369708269617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lw8lr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 81da26ef-ec50-4d25-9e68-5daf93bbc089,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee7004b42faa7de00787141893fa64b6dac5c9e7523e014b84096f5b32b7bf,PodSandboxId:8f87c7ca4b22be89a95d0e0a38c79679d06b1e8399e9765fc59b8b308f76794e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723763342415725772,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5c0b5079-ac0c-4418-9904-70626aa5e8a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e94101b907952596fe98baa590eae3d59c7b0a9b547ff2676641e96dc7bfcd,PodSandboxId:48dbf8ac51d762c635548965a7459cb12e26e9f9ca6cab9dd574a27bd505e357,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723763272566074617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a2fa3b2-791e-48ef-be92-888357fe9cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676bba24daba93ea7fff4302bf45bc176524315dfa6ffdb45a4c8ce41f13738c,PodSandboxId:9325a6cd6715f4699712eb40c9f5016898743a5a47ce3c18e24f5bc3512b05aa,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723763232309967707,Labels:map[string]string{io.kubernetes.container.name: metrics-s
erver,io.kubernetes.pod.name: metrics-server-8988944d9-4mjqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e01981-c592-4b6b-a285-4046cf8c68c0,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08,PodSandboxId:34f0ae38ae64119934e40f276eb62b021909c4b0bb33e8285ecfe11900f0cb6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cr
eatedAt:1723763194859223677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4cede15-f6e5-4422-a61f-260751693d94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3,PodSandboxId:328c613b14858d6b618687563f5afcf9241ac637e422a95255a6a96fa270d615,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723763190463277
779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mtm8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f0df8d-c410-42be-8666-0163180a0538,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a,PodSandboxId:a6a31bca98aa383199539ffcb569c2e2143c8bee02be45f4c8f360a470aa0097,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6
494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723763187945446864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cg5sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918,PodSandboxId:09e7916c53c75c07337bae6ed869a2c8eebdd646c28ceda32e2c778a5fdc6874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723763176683271176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72ded37e150ae0b29e520797537348,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6,PodSandboxId:9e717d3a8c79c8eb7656ea1ffc869d6b1bc71d8a480efd1fcf7365c4857065b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723763176566199563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202f95e6f816d20eb9ce27dea34ed92b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169,PodSandboxId:45fac553f83f4d33698d06478d7b8fb9336318b016fe2911b6ab9b266051bf87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04
5733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723763176597813746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114b1fdcb0e22b9a92ce8b83728b0267,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03,PodSandboxId:4eb36ed0d90ef1f61481dcbae7c4c44680ed193f586ada40657fdc671414e89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
04f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723763176518253342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c72ec0755a8a97da3644a1b805d7ac6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcd34055-f4cc-4480-a17f-bfbc411afcd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.573773810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7eb27dfe-4be8-4719-875f-dfbecffb2f9d name=/runtime.v1.RuntimeService/Version
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.573870303Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7eb27dfe-4be8-4719-875f-dfbecffb2f9d name=/runtime.v1.RuntimeService/Version
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.575443154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3c588ac-3cd2-451b-bf9b-06f06133412d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.577771167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763604577717714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3c588ac-3cd2-451b-bf9b-06f06133412d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.578321496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfa678be-d2c1-4a99-b377-441d68cf64dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.578378912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfa678be-d2c1-4a99-b377-441d68cf64dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.578705114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:587d41a9670abbcb893c28180eb87709353a9042620aede43cce0d4211917757,PodSandboxId:1481e58423b29e2a8a2c6284fab05d8808d00d4f57cee1ed94f5bdbd08ce1972,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723763483744240081,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bxccf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 633a66a4-e3b2-442f-8b09-ab0c395605df,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7aaca643dfd9b8aa29f0cb69a7b703b8a0616b2bd0b0f757450625ea7a29456,PodSandboxId:00fb62481fea5491a7ae7a30917dd3a39960f636ec1e0daa293d907625668f4a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723763369708269617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lw8lr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 81da26ef-ec50-4d25-9e68-5daf93bbc089,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee7004b42faa7de00787141893fa64b6dac5c9e7523e014b84096f5b32b7bf,PodSandboxId:8f87c7ca4b22be89a95d0e0a38c79679d06b1e8399e9765fc59b8b308f76794e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723763342415725772,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5c0b5079-ac0c-4418-9904-70626aa5e8a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e94101b907952596fe98baa590eae3d59c7b0a9b547ff2676641e96dc7bfcd,PodSandboxId:48dbf8ac51d762c635548965a7459cb12e26e9f9ca6cab9dd574a27bd505e357,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723763272566074617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a2fa3b2-791e-48ef-be92-888357fe9cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676bba24daba93ea7fff4302bf45bc176524315dfa6ffdb45a4c8ce41f13738c,PodSandboxId:9325a6cd6715f4699712eb40c9f5016898743a5a47ce3c18e24f5bc3512b05aa,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723763232309967707,Labels:map[string]string{io.kubernetes.container.name: metrics-s
erver,io.kubernetes.pod.name: metrics-server-8988944d9-4mjqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e01981-c592-4b6b-a285-4046cf8c68c0,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08,PodSandboxId:34f0ae38ae64119934e40f276eb62b021909c4b0bb33e8285ecfe11900f0cb6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cr
eatedAt:1723763194859223677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4cede15-f6e5-4422-a61f-260751693d94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3,PodSandboxId:328c613b14858d6b618687563f5afcf9241ac637e422a95255a6a96fa270d615,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723763190463277
779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mtm8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f0df8d-c410-42be-8666-0163180a0538,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a,PodSandboxId:a6a31bca98aa383199539ffcb569c2e2143c8bee02be45f4c8f360a470aa0097,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6
494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723763187945446864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cg5sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918,PodSandboxId:09e7916c53c75c07337bae6ed869a2c8eebdd646c28ceda32e2c778a5fdc6874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723763176683271176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72ded37e150ae0b29e520797537348,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6,PodSandboxId:9e717d3a8c79c8eb7656ea1ffc869d6b1bc71d8a480efd1fcf7365c4857065b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723763176566199563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202f95e6f816d20eb9ce27dea34ed92b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169,PodSandboxId:45fac553f83f4d33698d06478d7b8fb9336318b016fe2911b6ab9b266051bf87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04
5733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723763176597813746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114b1fdcb0e22b9a92ce8b83728b0267,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03,PodSandboxId:4eb36ed0d90ef1f61481dcbae7c4c44680ed193f586ada40657fdc671414e89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
04f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723763176518253342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c72ec0755a8a97da3644a1b805d7ac6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cfa678be-d2c1-4a99-b377-441d68cf64dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.613524400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=275feeb8-b4e7-422b-8988-24380d9c542f name=/runtime.v1.RuntimeService/Version
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.613682327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=275feeb8-b4e7-422b-8988-24380d9c542f name=/runtime.v1.RuntimeService/Version
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.615392829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7c7e87a-de18-46c1-a5b8-85185b3b6227 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.616815263Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763604616787455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7c7e87a-de18-46c1-a5b8-85185b3b6227 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.617353034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=778ad15e-0998-4139-b5be-a8627c6cfa33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.617414821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=778ad15e-0998-4139-b5be-a8627c6cfa33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.617737036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:587d41a9670abbcb893c28180eb87709353a9042620aede43cce0d4211917757,PodSandboxId:1481e58423b29e2a8a2c6284fab05d8808d00d4f57cee1ed94f5bdbd08ce1972,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723763483744240081,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bxccf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 633a66a4-e3b2-442f-8b09-ab0c395605df,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7aaca643dfd9b8aa29f0cb69a7b703b8a0616b2bd0b0f757450625ea7a29456,PodSandboxId:00fb62481fea5491a7ae7a30917dd3a39960f636ec1e0daa293d907625668f4a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723763369708269617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lw8lr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 81da26ef-ec50-4d25-9e68-5daf93bbc089,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee7004b42faa7de00787141893fa64b6dac5c9e7523e014b84096f5b32b7bf,PodSandboxId:8f87c7ca4b22be89a95d0e0a38c79679d06b1e8399e9765fc59b8b308f76794e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723763342415725772,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5c0b5079-ac0c-4418-9904-70626aa5e8a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e94101b907952596fe98baa590eae3d59c7b0a9b547ff2676641e96dc7bfcd,PodSandboxId:48dbf8ac51d762c635548965a7459cb12e26e9f9ca6cab9dd574a27bd505e357,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723763272566074617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a2fa3b2-791e-48ef-be92-888357fe9cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676bba24daba93ea7fff4302bf45bc176524315dfa6ffdb45a4c8ce41f13738c,PodSandboxId:9325a6cd6715f4699712eb40c9f5016898743a5a47ce3c18e24f5bc3512b05aa,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723763232309967707,Labels:map[string]string{io.kubernetes.container.name: metrics-s
erver,io.kubernetes.pod.name: metrics-server-8988944d9-4mjqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e01981-c592-4b6b-a285-4046cf8c68c0,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08,PodSandboxId:34f0ae38ae64119934e40f276eb62b021909c4b0bb33e8285ecfe11900f0cb6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cr
eatedAt:1723763194859223677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4cede15-f6e5-4422-a61f-260751693d94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3,PodSandboxId:328c613b14858d6b618687563f5afcf9241ac637e422a95255a6a96fa270d615,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723763190463277
779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mtm8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f0df8d-c410-42be-8666-0163180a0538,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a,PodSandboxId:a6a31bca98aa383199539ffcb569c2e2143c8bee02be45f4c8f360a470aa0097,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6
494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723763187945446864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cg5sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918,PodSandboxId:09e7916c53c75c07337bae6ed869a2c8eebdd646c28ceda32e2c778a5fdc6874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723763176683271176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72ded37e150ae0b29e520797537348,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6,PodSandboxId:9e717d3a8c79c8eb7656ea1ffc869d6b1bc71d8a480efd1fcf7365c4857065b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723763176566199563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202f95e6f816d20eb9ce27dea34ed92b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169,PodSandboxId:45fac553f83f4d33698d06478d7b8fb9336318b016fe2911b6ab9b266051bf87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04
5733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723763176597813746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114b1fdcb0e22b9a92ce8b83728b0267,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03,PodSandboxId:4eb36ed0d90ef1f61481dcbae7c4c44680ed193f586ada40657fdc671414e89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
04f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723763176518253342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c72ec0755a8a97da3644a1b805d7ac6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=778ad15e-0998-4139-b5be-a8627c6cfa33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.640885082Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8cfebceb-65de-4010-adf4-c85cb6d1da04 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.641190690Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1481e58423b29e2a8a2c6284fab05d8808d00d4f57cee1ed94f5bdbd08ce1972,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bxccf,Uid:633a66a4-e3b2-442f-8b09-ab0c395605df,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763482792478471,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bxccf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 633a66a4-e3b2-442f-8b09-ab0c395605df,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:11:22.159007124Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00fb62481fea5491a7ae7a30917dd3a39960f636ec1e0daa293d907625668f4a,Metadata:&PodSandboxMetadata{Name:headlamp-57fb76fcdb-lw8lr,Uid:81da26ef-ec50-4d25-9e68-5daf93bbc089,Namespace
:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763366221244921,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-57fb76fcdb-lw8lr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 81da26ef-ec50-4d25-9e68-5daf93bbc089,pod-template-hash: 57fb76fcdb,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:09:25.907170354Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f87c7ca4b22be89a95d0e0a38c79679d06b1e8399e9765fc59b8b308f76794e,Metadata:&PodSandboxMetadata{Name:nginx,Uid:5c0b5079-ac0c-4418-9904-70626aa5e8a0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763339914283888,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c0b5079-ac0c-4418-9904-70626aa5e8a0,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
08-15T23:08:59.599159691Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:48dbf8ac51d762c635548965a7459cb12e26e9f9ca6cab9dd574a27bd505e357,Metadata:&PodSandboxMetadata{Name:busybox,Uid:7a2fa3b2-791e-48ef-be92-888357fe9cdb,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763271422311810,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a2fa3b2-791e-48ef-be92-888357fe9cdb,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:07:51.094238821Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9325a6cd6715f4699712eb40c9f5016898743a5a47ce3c18e24f5bc3512b05aa,Metadata:&PodSandboxMetadata{Name:metrics-server-8988944d9-4mjqf,Uid:f4e01981-c592-4b6b-a285-4046cf8c68c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763194051876600,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.po
d.name: metrics-server-8988944d9-4mjqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e01981-c592-4b6b-a285-4046cf8c68c0,k8s-app: metrics-server,pod-template-hash: 8988944d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:06:33.439380581Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34f0ae38ae64119934e40f276eb62b021909c4b0bb33e8285ecfe11900f0cb6f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a4cede15-f6e5-4422-a61f-260751693d94,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763193874846541,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4cede15-f6e5-4422-a61f-260751693d94,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annota
tions\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T23:06:33.257584750Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:328c613b14858d6b618687563f5afcf9241ac637e422a95255a6a96fa270d615,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-mtm8z,Uid:d8f0df8d-c410-42be-8666-0163180a0538,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763187764002581,Labels:map[string]string{io.kubernetes.container.
name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-mtm8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f0df8d-c410-42be-8666-0163180a0538,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:06:27.154830434Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a6a31bca98aa383199539ffcb569c2e2143c8bee02be45f4c8f360a470aa0097,Metadata:&PodSandboxMetadata{Name:kube-proxy-cg5sj,Uid:ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763187259512841,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-cg5sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:06:26.936009637Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:9e717d3a8c79c8eb7656ea1ffc869d6b1bc71d8a480efd1fcf7365c4857065b3,Metadata:&PodSandboxMetadata{Name:etcd-addons-517040,Uid:202f95e6f816d20eb9ce27dea34ed92b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763176395400133,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202f95e6f816d20eb9ce27dea34ed92b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.72:2379,kubernetes.io/config.hash: 202f95e6f816d20eb9ce27dea34ed92b,kubernetes.io/config.seen: 2024-08-15T23:06:15.884090846Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09e7916c53c75c07337bae6ed869a2c8eebdd646c28ceda32e2c778a5fdc6874,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-517040,Uid:cf72ded37e150ae0b29e520797537348,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt
:1723763176376999713,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72ded37e150ae0b29e520797537348,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cf72ded37e150ae0b29e520797537348,kubernetes.io/config.seen: 2024-08-15T23:06:15.884096965Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:45fac553f83f4d33698d06478d7b8fb9336318b016fe2911b6ab9b266051bf87,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-517040,Uid:114b1fdcb0e22b9a92ce8b83728b0267,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763176375575162,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114b1fdcb0e22b9a92ce8b83728b0267,tier: control-plan
e,},Annotations:map[string]string{kubernetes.io/config.hash: 114b1fdcb0e22b9a92ce8b83728b0267,kubernetes.io/config.seen: 2024-08-15T23:06:15.884096132Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4eb36ed0d90ef1f61481dcbae7c4c44680ed193f586ada40657fdc671414e89e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-517040,Uid:2c72ec0755a8a97da3644a1b805d7ac6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723763176371704580,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c72ec0755a8a97da3644a1b805d7ac6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.72:8443,kubernetes.io/config.hash: 2c72ec0755a8a97da3644a1b805d7ac6,kubernetes.io/config.seen: 2024-08-15T23:06:15.884095034Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="
otel-collector/interceptors.go:74" id=8cfebceb-65de-4010-adf4-c85cb6d1da04 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.641881010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2b1d6c4-d27e-4bd6-99ed-89fe59a3b701 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.641956991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2b1d6c4-d27e-4bd6-99ed-89fe59a3b701 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:13:24 addons-517040 crio[686]: time="2024-08-15 23:13:24.642326705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:587d41a9670abbcb893c28180eb87709353a9042620aede43cce0d4211917757,PodSandboxId:1481e58423b29e2a8a2c6284fab05d8808d00d4f57cee1ed94f5bdbd08ce1972,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723763483744240081,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bxccf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 633a66a4-e3b2-442f-8b09-ab0c395605df,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7aaca643dfd9b8aa29f0cb69a7b703b8a0616b2bd0b0f757450625ea7a29456,PodSandboxId:00fb62481fea5491a7ae7a30917dd3a39960f636ec1e0daa293d907625668f4a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723763369708269617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lw8lr,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 81da26ef-ec50-4d25-9e68-5daf93bbc089,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee7004b42faa7de00787141893fa64b6dac5c9e7523e014b84096f5b32b7bf,PodSandboxId:8f87c7ca4b22be89a95d0e0a38c79679d06b1e8399e9765fc59b8b308f76794e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723763342415725772,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,
io.kubernetes.pod.uid: 5c0b5079-ac0c-4418-9904-70626aa5e8a0,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e94101b907952596fe98baa590eae3d59c7b0a9b547ff2676641e96dc7bfcd,PodSandboxId:48dbf8ac51d762c635548965a7459cb12e26e9f9ca6cab9dd574a27bd505e357,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723763272566074617,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a2fa3b2-791e-48ef-be92-888357fe9cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:676bba24daba93ea7fff4302bf45bc176524315dfa6ffdb45a4c8ce41f13738c,PodSandboxId:9325a6cd6715f4699712eb40c9f5016898743a5a47ce3c18e24f5bc3512b05aa,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723763232309967707,Labels:map[string]string{io.kubernetes.container.name: metrics-s
erver,io.kubernetes.pod.name: metrics-server-8988944d9-4mjqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e01981-c592-4b6b-a285-4046cf8c68c0,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08,PodSandboxId:34f0ae38ae64119934e40f276eb62b021909c4b0bb33e8285ecfe11900f0cb6f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cr
eatedAt:1723763194859223677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4cede15-f6e5-4422-a61f-260751693d94,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3,PodSandboxId:328c613b14858d6b618687563f5afcf9241ac637e422a95255a6a96fa270d615,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723763190463277
779,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mtm8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f0df8d-c410-42be-8666-0163180a0538,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a,PodSandboxId:a6a31bca98aa383199539ffcb569c2e2143c8bee02be45f4c8f360a470aa0097,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6
494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723763187945446864,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cg5sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede8c3a9-8c4a-44a9-b8d9-6db190ceae87,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918,PodSandboxId:09e7916c53c75c07337bae6ed869a2c8eebdd646c28ceda32e2c778a5fdc6874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723763176683271176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf72ded37e150ae0b29e520797537348,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6,PodSandboxId:9e717d3a8c79c8eb7656ea1ffc869d6b1bc71d8a480efd1fcf7365c4857065b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723763176566199563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202f95e6f816d20eb9ce27dea34ed92b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169,PodSandboxId:45fac553f83f4d33698d06478d7b8fb9336318b016fe2911b6ab9b266051bf87,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:04
5733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723763176597813746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 114b1fdcb0e22b9a92ce8b83728b0267,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03,PodSandboxId:4eb36ed0d90ef1f61481dcbae7c4c44680ed193f586ada40657fdc671414e89e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
04f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723763176518253342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-517040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c72ec0755a8a97da3644a1b805d7ac6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2b1d6c4-d27e-4bd6-99ed-89fe59a3b701 name=/runtime.v1.RuntimeService/ListContainers
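The ListPodSandbox/ListContainers entries above are CRI-O's debug-level gRPC trace of the kubelet polling the runtime. As a minimal sketch (assuming the addons-517040 VM from this run is still up), the same stream can be tailed from the node's journal:

  # tail the CRI-O debug log on the minikube node
  out/minikube-linux-amd64 -p addons-517040 ssh "sudo journalctl -u crio --no-pager -n 50"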
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	587d41a9670ab       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   1481e58423b29       hello-world-app-55bf9c44b4-bxccf
	d7aaca643dfd9       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                   3 minutes ago       Running             headlamp                  0                   00fb62481fea5       headlamp-57fb76fcdb-lw8lr
	15ee7004b42fa       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   8f87c7ca4b22b       nginx
	14e94101b9079       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   48dbf8ac51d76       busybox
	676bba24daba9       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   9325a6cd6715f       metrics-server-8988944d9-4mjqf
	e21f3d503431c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        6 minutes ago       Running             storage-provisioner       0                   34f0ae38ae641       storage-provisioner
	9cc0790a4dd8a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        6 minutes ago       Running             coredns                   0                   328c613b14858       coredns-6f6b679f8f-mtm8z
	783cc30d2dd7d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        6 minutes ago       Running             kube-proxy                0                   a6a31bca98aa3       kube-proxy-cg5sj
	9903cff7350a8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago       Running             kube-scheduler            0                   09e7916c53c75       kube-scheduler-addons-517040
	3e11626924865       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago       Running             kube-controller-manager   0                   45fac553f83f4       kube-controller-manager-addons-517040
	02f3110373f4a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   9e717d3a8c79c       etcd-addons-517040
	0a061e44f1eb6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago       Running             kube-apiserver            0                   4eb36ed0d90ef       kube-apiserver-addons-517040
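The table above is the CRI-O view of every container on the node. A minimal way to reproduce it against this profile (assuming the cluster is still running):

  # list all CRI-O containers, including exited ones
  out/minikube-linux-amd64 -p addons-517040 ssh "sudo crictl ps -a"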
	
	
	==> coredns [9cc0790a4dd8a3ce9bf54a6669dc82c0e3a6e1706d0ce2443202fae3ebe312d3] <==
	[INFO] 10.244.0.8:34968 - 45637 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156012s
	[INFO] 10.244.0.8:54248 - 30794 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125144s
	[INFO] 10.244.0.8:54248 - 15940 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068837s
	[INFO] 10.244.0.8:36658 - 37101 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072349s
	[INFO] 10.244.0.8:36658 - 52207 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050756s
	[INFO] 10.244.0.8:53860 - 30569 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141586s
	[INFO] 10.244.0.8:53860 - 47208 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086933s
	[INFO] 10.244.0.8:51072 - 49197 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000221128s
	[INFO] 10.244.0.8:51072 - 48944 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121617s
	[INFO] 10.244.0.8:58645 - 4884 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041041s
	[INFO] 10.244.0.8:58645 - 49162 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122933s
	[INFO] 10.244.0.8:46833 - 5940 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034973s
	[INFO] 10.244.0.8:46833 - 28982 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000127592s
	[INFO] 10.244.0.8:52695 - 57356 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000296s
	[INFO] 10.244.0.8:52695 - 15374 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121454s
	[INFO] 10.244.0.22:51658 - 56784 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000646471s
	[INFO] 10.244.0.22:55605 - 56632 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000374544s
	[INFO] 10.244.0.22:42138 - 60930 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100515s
	[INFO] 10.244.0.22:34345 - 30793 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000067425s
	[INFO] 10.244.0.22:39572 - 52722 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063156s
	[INFO] 10.244.0.22:51098 - 42197 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000049731s
	[INFO] 10.244.0.22:56710 - 6424 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000788608s
	[INFO] 10.244.0.22:51087 - 23101 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000560135s
	[INFO] 10.244.0.24:39868 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000291488s
	[INFO] 10.244.0.24:46146 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111083s
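The NXDOMAIN lines above are the normal search-path expansion (...kube-system.svc.cluster.local, ...svc.cluster.local, ...cluster.local) attempted before the fully qualified name answers NOERROR. A minimal sketch of reproducing such a lookup, assuming the busybox test pod is still present in the default namespace:

  # resolve the in-cluster registry service from inside a pod
  kubectl --context addons-517040 exec busybox -- nslookup registry.kube-system.svc.cluster.local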
	
	
	==> describe nodes <==
	Name:               addons-517040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-517040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=addons-517040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T23_06_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-517040
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:06:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-517040
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:13:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:11:59 +0000   Thu, 15 Aug 2024 23:06:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:11:59 +0000   Thu, 15 Aug 2024 23:06:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:11:59 +0000   Thu, 15 Aug 2024 23:06:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:11:59 +0000   Thu, 15 Aug 2024 23:06:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    addons-517040
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 36e7519d3eca490ea4a9a1ff050606a7
	  System UUID:                36e7519d-3eca-490e-a4a9-a1ff050606a7
	  Boot ID:                    028cdf2c-7fe5-4c84-846c-ca06f7b1a090
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  default                     hello-world-app-55bf9c44b4-bxccf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  headlamp                    headlamp-57fb76fcdb-lw8lr                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-6f6b679f8f-mtm8z                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m57s
	  kube-system                 etcd-addons-517040                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m2s
	  kube-system                 kube-apiserver-addons-517040             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-controller-manager-addons-517040    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-proxy-cg5sj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-scheduler-addons-517040             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 metrics-server-8988944d9-4mjqf           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m51s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m55s  kube-proxy       
	  Normal  Starting                 7m2s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m2s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m2s   kubelet          Node addons-517040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m2s   kubelet          Node addons-517040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m2s   kubelet          Node addons-517040 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m1s   kubelet          Node addons-517040 status is now: NodeReady
	  Normal  RegisteredNode           6m58s  node-controller  Node addons-517040 event: Registered Node addons-517040 in Controller
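This node summary is the output of kubectl describe node. A minimal sketch for re-checking the node's conditions and allocatable resources on this profile (assuming the cluster is still up):

  kubectl --context addons-517040 describe node addons-517040
  # or just the allocatable block
  kubectl --context addons-517040 get node addons-517040 -o jsonpath='{.status.allocatable}'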
	
	
	==> dmesg <==
	[  +5.004085] kauditd_printk_skb: 113 callbacks suppressed
	[  +8.784335] kauditd_printk_skb: 97 callbacks suppressed
	[ +11.605210] kauditd_printk_skb: 1 callbacks suppressed
	[Aug15 23:07] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.661497] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.778386] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.175528] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.095303] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.385077] kauditd_printk_skb: 71 callbacks suppressed
	[  +6.531494] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.307334] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.054538] kauditd_printk_skb: 55 callbacks suppressed
	[  +9.699595] kauditd_printk_skb: 8 callbacks suppressed
	[Aug15 23:08] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.064700] kauditd_printk_skb: 24 callbacks suppressed
	[ +13.176090] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.018946] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.638132] kauditd_printk_skb: 71 callbacks suppressed
	[  +5.177872] kauditd_printk_skb: 41 callbacks suppressed
	[ +11.139351] kauditd_printk_skb: 11 callbacks suppressed
	[Aug15 23:09] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.298176] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.680984] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.683028] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 23:11] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [02f3110373f4a04d5608f4604132e0d0ae16718556823f50d374e3c9e3df20e6] <==
	{"level":"warn","ts":"2024-08-15T23:07:44.778322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.946041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-15T23:07:44.778337Z","caller":"traceutil/trace.go:171","msg":"trace[1995727999] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1115; }","duration":"196.96329ms","start":"2024-08-15T23:07:44.581368Z","end":"2024-08-15T23:07:44.778332Z","steps":["trace[1995727999] 'agreement among raft nodes before linearized reading'  (duration: 196.936008ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T23:07:54.316199Z","caller":"traceutil/trace.go:171","msg":"trace[1322011620] linearizableReadLoop","detail":"{readStateIndex:1216; appliedIndex:1215; }","duration":"279.336868ms","start":"2024-08-15T23:07:54.036846Z","end":"2024-08-15T23:07:54.316183Z","steps":["trace[1322011620] 'read index received'  (duration: 279.159992ms)","trace[1322011620] 'applied index is now lower than readState.Index'  (duration: 176.026µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T23:07:54.316309Z","caller":"traceutil/trace.go:171","msg":"trace[1359792755] transaction","detail":"{read_only:false; response_revision:1184; number_of_response:1; }","duration":"333.988133ms","start":"2024-08-15T23:07:53.982315Z","end":"2024-08-15T23:07:54.316303Z","steps":["trace[1359792755] 'process raft request'  (duration: 333.747168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:07:54.316416Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T23:07:53.982296Z","time spent":"334.031256ms","remote":"127.0.0.1:41854","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":11025,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/addons-517040\" mod_revision:1078 > success:<request_put:<key:\"/registry/minions/addons-517040\" value_size:10986 >> failure:<request_range:<key:\"/registry/minions/addons-517040\" > >"}
	{"level":"warn","ts":"2024-08-15T23:07:54.316558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.710935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-15T23:07:54.316684Z","caller":"traceutil/trace.go:171","msg":"trace[1498191755] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1184; }","duration":"279.83278ms","start":"2024-08-15T23:07:54.036842Z","end":"2024-08-15T23:07:54.316675Z","steps":["trace[1498191755] 'agreement among raft nodes before linearized reading'  (duration: 279.659067ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:07:54.316927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.109938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-8988944d9-4mjqf.17ec098739334263\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-08-15T23:07:54.316965Z","caller":"traceutil/trace.go:171","msg":"trace[691200612] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-8988944d9-4mjqf.17ec098739334263; range_end:; response_count:1; response_revision:1184; }","duration":"263.150274ms","start":"2024-08-15T23:07:54.053807Z","end":"2024-08-15T23:07:54.316958Z","steps":["trace[691200612] 'agreement among raft nodes before linearized reading'  (duration: 263.069128ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:07:54.317080Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.810284ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T23:07:54.317113Z","caller":"traceutil/trace.go:171","msg":"trace[535716717] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1184; }","duration":"136.844217ms","start":"2024-08-15T23:07:54.180264Z","end":"2024-08-15T23:07:54.317108Z","steps":["trace[535716717] 'agreement among raft nodes before linearized reading'  (duration: 136.803494ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:07:54.317800Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.465505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T23:07:54.320238Z","caller":"traceutil/trace.go:171","msg":"trace[2102502889] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1184; }","duration":"169.900128ms","start":"2024-08-15T23:07:54.150323Z","end":"2024-08-15T23:07:54.320223Z","steps":["trace[2102502889] 'agreement among raft nodes before linearized reading'  (duration: 167.445136ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T23:08:34.696425Z","caller":"traceutil/trace.go:171","msg":"trace[433513394] transaction","detail":"{read_only:false; response_revision:1388; number_of_response:1; }","duration":"154.81689ms","start":"2024-08-15T23:08:34.541588Z","end":"2024-08-15T23:08:34.696404Z","steps":["trace[433513394] 'process raft request'  (duration: 154.506796ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:08:37.270866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.032721ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17902812752124459763 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/test-pvc\" mod_revision:1411 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/test-pvc\" value_size:997 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/test-pvc\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T23:08:37.270955Z","caller":"traceutil/trace.go:171","msg":"trace[803499199] linearizableReadLoop","detail":"{readStateIndex:1460; appliedIndex:1459; }","duration":"330.596502ms","start":"2024-08-15T23:08:36.940347Z","end":"2024-08-15T23:08:37.270944Z","steps":["trace[803499199] 'read index received'  (duration: 24.447528ms)","trace[803499199] 'applied index is now lower than readState.Index'  (duration: 306.148178ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T23:08:37.271154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.830774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T23:08:37.271179Z","caller":"traceutil/trace.go:171","msg":"trace[1311152874] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1415; }","duration":"330.860022ms","start":"2024-08-15T23:08:36.940311Z","end":"2024-08-15T23:08:37.271171Z","steps":["trace[1311152874] 'agreement among raft nodes before linearized reading'  (duration: 330.805732ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:08:37.271205Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T23:08:36.940268Z","time spent":"330.929918ms","remote":"127.0.0.1:54048","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-15T23:08:37.271283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.380406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1069"}
	{"level":"info","ts":"2024-08-15T23:08:37.271312Z","caller":"traceutil/trace.go:171","msg":"trace[1229371035] transaction","detail":"{read_only:false; response_revision:1415; number_of_response:1; }","duration":"343.84108ms","start":"2024-08-15T23:08:36.927464Z","end":"2024-08-15T23:08:37.271306Z","steps":["trace[1229371035] 'process raft request'  (duration: 37.264457ms)","trace[1229371035] 'compare'  (duration: 305.62383ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T23:08:37.271318Z","caller":"traceutil/trace.go:171","msg":"trace[1728528703] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1416; }","duration":"265.421352ms","start":"2024-08-15T23:08:37.005889Z","end":"2024-08-15T23:08:37.271310Z","steps":["trace[1728528703] 'agreement among raft nodes before linearized reading'  (duration: 265.342615ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T23:08:37.271360Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T23:08:36.927436Z","time spent":"343.895015ms","remote":"127.0.0.1:41840","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1054,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/test-pvc\" mod_revision:1411 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/test-pvc\" value_size:997 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/test-pvc\" > >"}
	{"level":"info","ts":"2024-08-15T23:08:37.271740Z","caller":"traceutil/trace.go:171","msg":"trace[1436630512] transaction","detail":"{read_only:false; response_revision:1416; number_of_response:1; }","duration":"265.606163ms","start":"2024-08-15T23:08:37.006127Z","end":"2024-08-15T23:08:37.271733Z","steps":["trace[1436630512] 'process raft request'  (duration: 265.050164ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T23:09:01.720476Z","caller":"traceutil/trace.go:171","msg":"trace[1960795402] transaction","detail":"{read_only:false; response_revision:1629; number_of_response:1; }","duration":"141.140362ms","start":"2024-08-15T23:09:01.576454Z","end":"2024-08-15T23:09:01.717595Z","steps":["trace[1960795402] 'process raft request'  (duration: 140.77956ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:13:25 up 7 min,  0 users,  load average: 0.18, 0.79, 0.55
	Linux addons-517040 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0a061e44f1eb6ee44de69311eb7be29c749442835f1dc5816571f9528e289a03] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0815 23:08:14.090870       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.106.174:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.106.174:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.106.174:443: connect: connection refused" logger="UnhandledError"
	E0815 23:08:14.098850       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.106.174:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.106.174:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.106.174:443: connect: connection refused" logger="UnhandledError"
	I0815 23:08:14.162159       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0815 23:08:26.103966       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 23:08:27.146246       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0815 23:08:39.804435       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.72:8443->10.244.0.26:37712: read: connection reset by peer
	I0815 23:08:44.929311       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 23:08:59.420323       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 23:08:59.640432       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.216.48"}
	E0815 23:09:03.503871       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0815 23:09:17.923924       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 23:09:17.924089       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 23:09:17.945790       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 23:09:17.945848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 23:09:17.988450       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 23:09:17.988554       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 23:09:18.011455       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 23:09:18.011508       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0815 23:09:19.011814       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0815 23:09:19.108915       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0815 23:09:19.110754       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0815 23:09:25.836671       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.189.90"}
	I0815 23:11:22.342434       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.126.185"}
	
	
	==> kube-controller-manager [3e11626924865700a06adcf66823e5a631298f57d7c638920fd943b271d71169] <==
	W0815 23:11:26.483098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:11:26.483250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:11:28.984263       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:11:28.984346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 23:11:34.873881       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0815 23:11:59.737585       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-517040"
	W0815 23:12:06.899475       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:12:06.899558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:12:13.648338       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:12:13.648482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:12:14.634882       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:12:14.634931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:12:22.766714       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:12:22.766780       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:12:50.369831       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:12:50.369956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:12:56.753465       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:12:56.753530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:13:00.541101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:13:00.541257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:13:15.034049       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:13:15.034120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 23:13:22.749116       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 23:13:22.749263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 23:13:23.604074       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="14.338µs"
	
	
	==> kube-proxy [783cc30d2dd7dfde7c2063f1718bcf546876d56284a91207405b9dea6154ff5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:06:28.863372       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:06:28.885863       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.72"]
	E0815 23:06:28.885965       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:06:28.990396       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:06:28.990423       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:06:28.990450       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:06:28.997917       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:06:28.998279       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:06:28.998294       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:06:28.999719       1 config.go:197] "Starting service config controller"
	I0815 23:06:28.999745       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:06:28.999767       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:06:28.999771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:06:29.000305       1 config.go:326] "Starting node config controller"
	I0815 23:06:29.000313       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:06:29.101242       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:06:29.101285       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:06:29.101369       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [9903cff7350a81e95307623e84193f0565dfbbc847a870a81579ba000ddee918] <==
	W0815 23:06:19.320803       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 23:06:19.320841       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 23:06:20.164865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 23:06:20.164974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.192809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 23:06:20.192865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.262127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 23:06:20.262562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.301071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 23:06:20.301184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.408576       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 23:06:20.408689       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 23:06:20.417271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 23:06:20.417348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.422576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 23:06:20.422668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.437531       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 23:06:20.438687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.587120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 23:06:20.587765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.613189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 23:06:20.613411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:06:20.631953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 23:06:20.632176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0815 23:06:23.213720       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 23:12:22 addons-517040 kubelet[1239]: I0815 23:12:22.836940    1239 scope.go:117] "RemoveContainer" containerID="8320e2c19284ee22b42e0d497623fb5776ceb46aace42caac83e4a90fd3bf456"
	Aug 15 23:12:32 addons-517040 kubelet[1239]: E0815 23:12:32.568790    1239 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763552568364338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:12:32 addons-517040 kubelet[1239]: E0815 23:12:32.568835    1239 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763552568364338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:12:42 addons-517040 kubelet[1239]: E0815 23:12:42.571238    1239 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763562570821555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:12:42 addons-517040 kubelet[1239]: E0815 23:12:42.571327    1239 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763562570821555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:12:44 addons-517040 kubelet[1239]: I0815 23:12:44.223052    1239 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 23:12:52 addons-517040 kubelet[1239]: E0815 23:12:52.574289    1239 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763572573752212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:12:52 addons-517040 kubelet[1239]: E0815 23:12:52.574338    1239 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763572573752212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:13:02 addons-517040 kubelet[1239]: E0815 23:13:02.577199    1239 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763582576779231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:13:02 addons-517040 kubelet[1239]: E0815 23:13:02.577247    1239 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763582576779231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:13:12 addons-517040 kubelet[1239]: E0815 23:13:12.584248    1239 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763592580218821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:13:12 addons-517040 kubelet[1239]: E0815 23:13:12.584321    1239 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763592580218821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:13:22 addons-517040 kubelet[1239]: E0815 23:13:22.251993    1239 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:13:22 addons-517040 kubelet[1239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:13:22 addons-517040 kubelet[1239]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:13:22 addons-517040 kubelet[1239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:13:22 addons-517040 kubelet[1239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:13:22 addons-517040 kubelet[1239]: E0815 23:13:22.586920    1239 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763602586480262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:13:22 addons-517040 kubelet[1239]: E0815 23:13:22.586963    1239 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723763602586480262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590613,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:13:25 addons-517040 kubelet[1239]: I0815 23:13:25.080006    1239 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f4e01981-c592-4b6b-a285-4046cf8c68c0-tmp-dir\") pod \"f4e01981-c592-4b6b-a285-4046cf8c68c0\" (UID: \"f4e01981-c592-4b6b-a285-4046cf8c68c0\") "
	Aug 15 23:13:25 addons-517040 kubelet[1239]: I0815 23:13:25.080055    1239 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsh89\" (UniqueName: \"kubernetes.io/projected/f4e01981-c592-4b6b-a285-4046cf8c68c0-kube-api-access-rsh89\") pod \"f4e01981-c592-4b6b-a285-4046cf8c68c0\" (UID: \"f4e01981-c592-4b6b-a285-4046cf8c68c0\") "
	Aug 15 23:13:25 addons-517040 kubelet[1239]: I0815 23:13:25.080420    1239 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e01981-c592-4b6b-a285-4046cf8c68c0-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f4e01981-c592-4b6b-a285-4046cf8c68c0" (UID: "f4e01981-c592-4b6b-a285-4046cf8c68c0"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 15 23:13:25 addons-517040 kubelet[1239]: I0815 23:13:25.087832    1239 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e01981-c592-4b6b-a285-4046cf8c68c0-kube-api-access-rsh89" (OuterVolumeSpecName: "kube-api-access-rsh89") pod "f4e01981-c592-4b6b-a285-4046cf8c68c0" (UID: "f4e01981-c592-4b6b-a285-4046cf8c68c0"). InnerVolumeSpecName "kube-api-access-rsh89". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 23:13:25 addons-517040 kubelet[1239]: I0815 23:13:25.180799    1239 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rsh89\" (UniqueName: \"kubernetes.io/projected/f4e01981-c592-4b6b-a285-4046cf8c68c0-kube-api-access-rsh89\") on node \"addons-517040\" DevicePath \"\""
	Aug 15 23:13:25 addons-517040 kubelet[1239]: I0815 23:13:25.180842    1239 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f4e01981-c592-4b6b-a285-4046cf8c68c0-tmp-dir\") on node \"addons-517040\" DevicePath \"\""
	
	
	==> storage-provisioner [e21f3d503431c244b8e05a031d6474130a9e960768e834fbe91fc3b94e3fca08] <==
	I0815 23:06:35.923009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 23:06:36.046981       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 23:06:36.047119       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 23:06:36.307905       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 23:06:36.308156       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-517040_4256da64-e71c-4807-97e1-ceb3c1645eca!
	I0815 23:06:36.309322       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8ed2cda5-fb42-41c6-9a9f-6f4a4762f63e", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-517040_4256da64-e71c-4807-97e1-ceb3c1645eca became leader
	I0815 23:06:36.408319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-517040_4256da64-e71c-4807-97e1-ceb3c1645eca!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-517040 -n addons-517040
helpers_test.go:261: (dbg) Run:  kubectl --context addons-517040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (306.23s)
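The kube-apiserver log above shows the relevant symptom for this failure: repeated "v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.106.174:443 ... connection refused", i.e. the metrics.k8s.io APIService never became reachable before the test timed out. A minimal manual-triage sketch, not part of the test itself (the kube-system namespace and the k8s-app=metrics-server label selector are assumptions; the report does not show them):

    # inspect the metrics-server pod and the registered APIService (namespace/label assumed)
    kubectl --context addons-517040 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context addons-517040 get apiservice v1beta1.metrics.k8s.io -o yaml
    # if the pod is running, confirm the aggregated metrics API actually answers
    kubectl --context addons-517040 top nodes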

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.29s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-517040
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-517040: exit status 82 (2m0.45403393s)

                                                
                                                
-- stdout --
	* Stopping node "addons-517040"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-517040" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-517040
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-517040: exit status 11 (21.548797327s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.72:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-517040" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-517040
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-517040: exit status 11 (6.143576037s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.72:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-517040" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-517040
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-517040: exit status 11 (6.143356675s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.72:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-517040" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.29s)
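The stop call exhausted the driver's retry window and exited with status 82 (GUEST_STOP_TIMEOUT), and every subsequent addons call then failed with "dial tcp 192.168.39.72:22: connect: no route to host" because the guest was left in an indeterminate state. A minimal sketch for reproducing and collecting diagnostics by hand, using the commands already shown in this section (combining the -p flag with the logs command is an assumption; the error box only shows `minikube logs --file=logs.txt`):

    # re-run the stop that timed out and record its exit code
    out/minikube-linux-amd64 stop -p addons-517040; echo "stop exit: $?"
    # collect logs as suggested in the error box above
    out/minikube-linux-amd64 logs --file=logs.txt -p addons-517040
    # the addon operations that failed once SSH to the VM was unreachable
    out/minikube-linux-amd64 addons enable dashboard -p addons-517040
    out/minikube-linux-amd64 addons disable dashboard -p addons-517040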

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (2.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image rm kicbase/echo-server:functional-629421 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-629421 image rm kicbase/echo-server:functional-629421 --alsologtostderr: (2.572521973s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls
functional_test.go:403: expected "kicbase/echo-server:functional-629421" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (2.90s)
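Here the `image rm` command itself returned success (it took ~2.6s), but the follow-up `image ls` still listed the tag, which is why the test reports the image as not removed. A minimal sketch for reproducing the check by hand with the same commands the test runs (the grep filter is an assumption added for readability, not part of the test):

    # remove the tag, then verify it no longer appears in the runtime's image list
    out/minikube-linux-amd64 -p functional-629421 image rm kicbase/echo-server:functional-629421 --alsologtostderr
    out/minikube-linux-amd64 -p functional-629421 image ls | grep echo-server && echo "image still present"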

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 node stop m02 -v=7 --alsologtostderr
E0815 23:25:14.294321   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:25:34.775796   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:26:15.738162   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.467124958s)

                                                
                                                
-- stdout --
	* Stopping node "ha-175414-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:25:08.137551   34674 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:25:08.137817   34674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:25:08.137827   34674 out.go:358] Setting ErrFile to fd 2...
	I0815 23:25:08.137831   34674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:25:08.138022   34674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:25:08.138260   34674 mustload.go:65] Loading cluster: ha-175414
	I0815 23:25:08.138608   34674 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:25:08.138622   34674 stop.go:39] StopHost: ha-175414-m02
	I0815 23:25:08.138948   34674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:25:08.138982   34674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:25:08.153969   34674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0815 23:25:08.154463   34674 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:25:08.155058   34674 main.go:141] libmachine: Using API Version  1
	I0815 23:25:08.155082   34674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:25:08.155430   34674 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:25:08.157553   34674 out.go:177] * Stopping node "ha-175414-m02"  ...
	I0815 23:25:08.158733   34674 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 23:25:08.158758   34674 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:25:08.158965   34674 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 23:25:08.158987   34674 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:25:08.161531   34674 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:25:08.161937   34674 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:25:08.161964   34674 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:25:08.162075   34674 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:25:08.162230   34674 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:25:08.162393   34674 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:25:08.162521   34674 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:25:08.253436   34674 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 23:25:08.311881   34674 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 23:25:08.368689   34674 main.go:141] libmachine: Stopping "ha-175414-m02"...
	I0815 23:25:08.368719   34674 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:25:08.370295   34674 main.go:141] libmachine: (ha-175414-m02) Calling .Stop
	I0815 23:25:08.373867   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 0/120
	I0815 23:25:09.375506   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 1/120
	I0815 23:25:10.376720   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 2/120
	I0815 23:25:11.378005   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 3/120
	I0815 23:25:12.380309   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 4/120
	I0815 23:25:13.381855   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 5/120
	I0815 23:25:14.383114   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 6/120
	I0815 23:25:15.384314   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 7/120
	I0815 23:25:16.386520   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 8/120
	I0815 23:25:17.387917   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 9/120
	I0815 23:25:18.390047   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 10/120
	I0815 23:25:19.391345   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 11/120
	I0815 23:25:20.392712   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 12/120
	I0815 23:25:21.394277   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 13/120
	I0815 23:25:22.395502   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 14/120
	I0815 23:25:23.396997   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 15/120
	I0815 23:25:24.398847   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 16/120
	I0815 23:25:25.400420   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 17/120
	I0815 23:25:26.401683   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 18/120
	I0815 23:25:27.403232   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 19/120
	I0815 23:25:28.405393   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 20/120
	I0815 23:25:29.406639   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 21/120
	I0815 23:25:30.408483   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 22/120
	I0815 23:25:31.409760   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 23/120
	I0815 23:25:32.411094   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 24/120
	I0815 23:25:33.413485   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 25/120
	I0815 23:25:34.415223   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 26/120
	I0815 23:25:35.416829   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 27/120
	I0815 23:25:36.418088   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 28/120
	I0815 23:25:37.420293   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 29/120
	I0815 23:25:38.422359   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 30/120
	I0815 23:25:39.424308   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 31/120
	I0815 23:25:40.425684   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 32/120
	I0815 23:25:41.426849   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 33/120
	I0815 23:25:42.429258   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 34/120
	I0815 23:25:43.431124   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 35/120
	I0815 23:25:44.432700   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 36/120
	I0815 23:25:45.433911   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 37/120
	I0815 23:25:46.435209   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 38/120
	I0815 23:25:47.436755   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 39/120
	I0815 23:25:48.439024   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 40/120
	I0815 23:25:49.440269   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 41/120
	I0815 23:25:50.441574   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 42/120
	I0815 23:25:51.443779   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 43/120
	I0815 23:25:52.444984   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 44/120
	I0815 23:25:53.446221   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 45/120
	I0815 23:25:54.448186   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 46/120
	I0815 23:25:55.449489   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 47/120
	I0815 23:25:56.451807   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 48/120
	I0815 23:25:57.453017   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 49/120
	I0815 23:25:58.454958   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 50/120
	I0815 23:25:59.456166   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 51/120
	I0815 23:26:00.457670   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 52/120
	I0815 23:26:01.458895   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 53/120
	I0815 23:26:02.460089   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 54/120
	I0815 23:26:03.461707   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 55/120
	I0815 23:26:04.463211   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 56/120
	I0815 23:26:05.464322   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 57/120
	I0815 23:26:06.465593   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 58/120
	I0815 23:26:07.466844   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 59/120
	I0815 23:26:08.468869   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 60/120
	I0815 23:26:09.470189   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 61/120
	I0815 23:26:10.471628   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 62/120
	I0815 23:26:11.473107   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 63/120
	I0815 23:26:12.474382   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 64/120
	I0815 23:26:13.476100   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 65/120
	I0815 23:26:14.477353   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 66/120
	I0815 23:26:15.479448   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 67/120
	I0815 23:26:16.480750   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 68/120
	I0815 23:26:17.482268   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 69/120
	I0815 23:26:18.484287   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 70/120
	I0815 23:26:19.485434   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 71/120
	I0815 23:26:20.486800   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 72/120
	I0815 23:26:21.488211   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 73/120
	I0815 23:26:22.489477   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 74/120
	I0815 23:26:23.490883   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 75/120
	I0815 23:26:24.492719   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 76/120
	I0815 23:26:25.494493   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 77/120
	I0815 23:26:26.495857   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 78/120
	I0815 23:26:27.497190   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 79/120
	I0815 23:26:28.499222   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 80/120
	I0815 23:26:29.500570   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 81/120
	I0815 23:26:30.501904   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 82/120
	I0815 23:26:31.503224   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 83/120
	I0815 23:26:32.504592   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 84/120
	I0815 23:26:33.505991   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 85/120
	I0815 23:26:34.508135   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 86/120
	I0815 23:26:35.509285   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 87/120
	I0815 23:26:36.510750   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 88/120
	I0815 23:26:37.512419   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 89/120
	I0815 23:26:38.514419   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 90/120
	I0815 23:26:39.515853   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 91/120
	I0815 23:26:40.517384   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 92/120
	I0815 23:26:41.518852   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 93/120
	I0815 23:26:42.520279   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 94/120
	I0815 23:26:43.522276   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 95/120
	I0815 23:26:44.524157   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 96/120
	I0815 23:26:45.525806   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 97/120
	I0815 23:26:46.527093   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 98/120
	I0815 23:26:47.528535   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 99/120
	I0815 23:26:48.530778   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 100/120
	I0815 23:26:49.532286   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 101/120
	I0815 23:26:50.533766   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 102/120
	I0815 23:26:51.535331   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 103/120
	I0815 23:26:52.537110   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 104/120
	I0815 23:26:53.538963   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 105/120
	I0815 23:26:54.540402   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 106/120
	I0815 23:26:55.541996   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 107/120
	I0815 23:26:56.543299   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 108/120
	I0815 23:26:57.544766   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 109/120
	I0815 23:26:58.546163   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 110/120
	I0815 23:26:59.548304   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 111/120
	I0815 23:27:00.549591   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 112/120
	I0815 23:27:01.550819   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 113/120
	I0815 23:27:02.552379   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 114/120
	I0815 23:27:03.554222   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 115/120
	I0815 23:27:04.556252   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 116/120
	I0815 23:27:05.557532   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 117/120
	I0815 23:27:06.558888   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 118/120
	I0815 23:27:07.560546   34674 main.go:141] libmachine: (ha-175414-m02) Waiting for machine to stop 119/120
	I0815 23:27:08.561816   34674 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 23:27:08.562100   34674 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-175414 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 3 (19.014056714s)

                                                
                                                
-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-175414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:27:08.610247   35114 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:27:08.610394   35114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:08.610403   35114 out.go:358] Setting ErrFile to fd 2...
	I0815 23:27:08.610409   35114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:08.610613   35114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:27:08.610888   35114 out.go:352] Setting JSON to false
	I0815 23:27:08.610920   35114 mustload.go:65] Loading cluster: ha-175414
	I0815 23:27:08.611025   35114 notify.go:220] Checking for updates...
	I0815 23:27:08.611328   35114 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:27:08.611348   35114 status.go:255] checking status of ha-175414 ...
	I0815 23:27:08.611785   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:08.611849   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:08.634470   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0815 23:27:08.634972   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:08.635603   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:08.635629   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:08.636020   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:08.636266   35114 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:27:08.637898   35114 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:27:08.637917   35114 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:08.638207   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:08.638236   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:08.652795   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0815 23:27:08.653249   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:08.653711   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:08.653728   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:08.654072   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:08.654235   35114 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:27:08.656981   35114 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:08.657422   35114 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:08.657449   35114 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:08.657578   35114 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:08.657936   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:08.657977   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:08.672516   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0815 23:27:08.672917   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:08.673391   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:08.673413   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:08.673754   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:08.673952   35114 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:27:08.674174   35114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:08.674205   35114 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:27:08.676968   35114 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:08.677294   35114 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:08.677319   35114 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:08.677445   35114 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:27:08.677598   35114 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:27:08.677744   35114 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:27:08.677868   35114 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:27:08.765399   35114 ssh_runner.go:195] Run: systemctl --version
	I0815 23:27:08.772838   35114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:08.791956   35114 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:08.791988   35114 api_server.go:166] Checking apiserver status ...
	I0815 23:27:08.792020   35114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:08.808137   35114 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0815 23:27:08.821418   35114 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:08.821488   35114 ssh_runner.go:195] Run: ls
	I0815 23:27:08.826450   35114 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:08.831095   35114 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:08.831114   35114 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:27:08.831123   35114 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:08.831154   35114 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:27:08.831439   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:08.831470   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:08.847289   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0815 23:27:08.847701   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:08.848278   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:08.848295   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:08.848593   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:08.848780   35114 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:27:08.850344   35114 status.go:330] ha-175414-m02 host status = "Running" (err=<nil>)
	I0815 23:27:08.850356   35114 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:08.850624   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:08.850657   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:08.865132   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0815 23:27:08.865542   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:08.866027   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:08.866057   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:08.866423   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:08.866614   35114 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:27:08.869601   35114 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:08.870018   35114 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:08.870045   35114 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:08.870197   35114 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:08.870494   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:08.870526   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:08.885167   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
	I0815 23:27:08.885526   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:08.885955   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:08.885974   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:08.886253   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:08.886449   35114 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:27:08.886620   35114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:08.886638   35114 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:27:08.889275   35114 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:08.889720   35114 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:08.889743   35114 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:08.889920   35114 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:27:08.890069   35114 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:27:08.890224   35114 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:27:08.890394   35114 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	W0815 23:27:27.214047   35114 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:27:27.214168   35114 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E0815 23:27:27.214189   35114 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:27.214200   35114 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 23:27:27.214232   35114 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:27.214250   35114 status.go:255] checking status of ha-175414-m03 ...
	I0815 23:27:27.214572   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:27.214627   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:27.230412   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0815 23:27:27.230843   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:27.231292   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:27.231310   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:27.231621   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:27.231869   35114 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:27:27.233371   35114 status.go:330] ha-175414-m03 host status = "Running" (err=<nil>)
	I0815 23:27:27.233387   35114 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:27.233687   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:27.233722   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:27.248316   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45657
	I0815 23:27:27.248834   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:27.249320   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:27.249338   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:27.249711   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:27.249956   35114 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:27:27.252832   35114 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:27.253224   35114 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:27.253253   35114 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:27.253457   35114 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:27.253815   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:27.253866   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:27.269591   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0815 23:27:27.270052   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:27.270535   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:27.270559   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:27.270839   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:27.271038   35114 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:27:27.271220   35114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:27.271240   35114 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:27:27.274379   35114 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:27.274829   35114 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:27.274857   35114 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:27.275012   35114 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:27:27.275199   35114 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:27:27.275366   35114 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:27:27.275508   35114 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:27:27.355336   35114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:27.375169   35114 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:27.375205   35114 api_server.go:166] Checking apiserver status ...
	I0815 23:27:27.375260   35114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:27.395842   35114 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0815 23:27:27.406378   35114 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:27.406445   35114 ssh_runner.go:195] Run: ls
	I0815 23:27:27.411313   35114 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:27.417649   35114 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:27.417672   35114 status.go:422] ha-175414-m03 apiserver status = Running (err=<nil>)
	I0815 23:27:27.417681   35114 status.go:257] ha-175414-m03 status: &{Name:ha-175414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:27.417694   35114 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:27:27.418101   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:27.418149   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:27.432700   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I0815 23:27:27.433167   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:27.433674   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:27.433692   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:27.433965   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:27.434137   35114 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:27:27.435629   35114 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:27:27.435655   35114 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:27.435986   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:27.436020   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:27.450456   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37025
	I0815 23:27:27.450856   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:27.451252   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:27.451271   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:27.451571   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:27.451744   35114 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:27:27.454384   35114 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:27.454736   35114 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:27.454769   35114 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:27.454956   35114 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:27.455242   35114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:27.455273   35114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:27.470505   35114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0815 23:27:27.470870   35114 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:27.471295   35114 main.go:141] libmachine: Using API Version  1
	I0815 23:27:27.471312   35114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:27.471656   35114 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:27.471849   35114 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:27:27.472023   35114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:27.472039   35114 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:27:27.474537   35114 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:27.474983   35114 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:27.475007   35114 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:27.475142   35114 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:27:27.475312   35114 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:27:27.475437   35114 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:27:27.475575   35114 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:27:27.558682   35114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:27.575329   35114 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr" : exit status 3
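In the status stderr above, the SSH dial to ha-175414-m02 fails with "no route to host", and that failure is what the stdout table renders as host: Error / kubelet: Nonexistent / apiserver: Nonexistent (for reachable nodes the log shows the fuller check: systemctl is-active kubelet plus an apiserver /healthz request). A rough Go sketch of that degradation path follows; nodeStatus and probeNode are hypothetical names, not minikube's actual code.

package main

import (
	"fmt"
	"net"
	"time"
)

// nodeStatus mirrors the fields printed in the status table above; the type
// name is illustrative only.
type nodeStatus struct {
	Name, Host, Kubelet, APIServer string
}

// probeNode reports a node as Error/Nonexistent when its SSH endpoint is
// unreachable, which is how a "no route to host" dial failure ends up as
// "host: Error" for m02. The real check does more; this shows only the
// fallback when the node cannot be reached at all.
func probeNode(name, sshAddr string) nodeStatus {
	conn, err := net.DialTimeout("tcp", sshAddr, 2*time.Second)
	if err != nil {
		return nodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	conn.Close()
	return nodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running"}
}

func main() {
	// 192.0.2.1 (TEST-NET-1) stands in for an unreachable node address.
	fmt.Printf("%+v\n", probeNode("ha-175414-m02", "192.0.2.1:22"))
}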
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-175414 -n ha-175414
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-175414 logs -n 25: (1.456166229s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile430320474/001/cp-test_ha-175414-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414:/home/docker/cp-test_ha-175414-m03_ha-175414.txt                      |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414 sudo cat                                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414.txt                                |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m02:/home/docker/cp-test_ha-175414-m03_ha-175414-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m02 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04:/home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m04 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp testdata/cp-test.txt                                               | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile430320474/001/cp-test_ha-175414-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414:/home/docker/cp-test_ha-175414-m04_ha-175414.txt                      |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414 sudo cat                                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414.txt                                |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m02:/home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m02 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03:/home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m03 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-175414 node stop m02 -v=7                                                    | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:20:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:20:39.132234   30687 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:20:39.132484   30687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:20:39.132492   30687 out.go:358] Setting ErrFile to fd 2...
	I0815 23:20:39.132496   30687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:20:39.132654   30687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:20:39.133199   30687 out.go:352] Setting JSON to false
	I0815 23:20:39.134115   30687 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3739,"bootTime":1723760300,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:20:39.134173   30687 start.go:139] virtualization: kvm guest
	I0815 23:20:39.136302   30687 out.go:177] * [ha-175414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:20:39.138076   30687 notify.go:220] Checking for updates...
	I0815 23:20:39.138101   30687 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:20:39.139349   30687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:20:39.140547   30687 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:20:39.141831   30687 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:20:39.143082   30687 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:20:39.144296   30687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:20:39.145648   30687 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:20:39.180551   30687 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 23:20:39.181708   30687 start.go:297] selected driver: kvm2
	I0815 23:20:39.181730   30687 start.go:901] validating driver "kvm2" against <nil>
	I0815 23:20:39.181741   30687 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:20:39.182442   30687 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:20:39.182539   30687 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:20:39.197281   30687 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:20:39.197328   30687 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 23:20:39.197558   30687 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:20:39.197627   30687 cni.go:84] Creating CNI manager for ""
	I0815 23:20:39.197642   30687 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0815 23:20:39.197650   30687 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 23:20:39.197711   30687 start.go:340] cluster config:
	{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:20:39.197828   30687 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:20:39.199692   30687 out.go:177] * Starting "ha-175414" primary control-plane node in "ha-175414" cluster
	I0815 23:20:39.201029   30687 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:20:39.201061   30687 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:20:39.201069   30687 cache.go:56] Caching tarball of preloaded images
	I0815 23:20:39.201155   30687 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:20:39.201171   30687 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:20:39.201495   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:20:39.201517   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json: {Name:mk6e3969a695f5334d0a96f3c5a2e62b2ca895a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:20:39.201679   30687 start.go:360] acquireMachinesLock for ha-175414: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:20:39.201714   30687 start.go:364] duration metric: took 19.572µs to acquireMachinesLock for "ha-175414"
	I0815 23:20:39.201736   30687 start.go:93] Provisioning new machine with config: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:20:39.201811   30687 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 23:20:39.203457   30687 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 23:20:39.203585   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:20:39.203629   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:20:39.217904   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0815 23:20:39.218312   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:20:39.218784   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:20:39.218803   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:20:39.219049   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:20:39.219227   30687 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:20:39.219382   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:20:39.219535   30687 start.go:159] libmachine.API.Create for "ha-175414" (driver="kvm2")
	I0815 23:20:39.219562   30687 client.go:168] LocalClient.Create starting
	I0815 23:20:39.219596   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0815 23:20:39.219628   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:20:39.219651   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:20:39.219703   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0815 23:20:39.219719   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:20:39.219737   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:20:39.219754   30687 main.go:141] libmachine: Running pre-create checks...
	I0815 23:20:39.219764   30687 main.go:141] libmachine: (ha-175414) Calling .PreCreateCheck
	I0815 23:20:39.220095   30687 main.go:141] libmachine: (ha-175414) Calling .GetConfigRaw
	I0815 23:20:39.220478   30687 main.go:141] libmachine: Creating machine...
	I0815 23:20:39.220490   30687 main.go:141] libmachine: (ha-175414) Calling .Create
	I0815 23:20:39.220616   30687 main.go:141] libmachine: (ha-175414) Creating KVM machine...
	I0815 23:20:39.221863   30687 main.go:141] libmachine: (ha-175414) DBG | found existing default KVM network
	I0815 23:20:39.222527   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.222381   30710 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0815 23:20:39.222556   30687 main.go:141] libmachine: (ha-175414) DBG | created network xml: 
	I0815 23:20:39.222569   30687 main.go:141] libmachine: (ha-175414) DBG | <network>
	I0815 23:20:39.222577   30687 main.go:141] libmachine: (ha-175414) DBG |   <name>mk-ha-175414</name>
	I0815 23:20:39.222586   30687 main.go:141] libmachine: (ha-175414) DBG |   <dns enable='no'/>
	I0815 23:20:39.222592   30687 main.go:141] libmachine: (ha-175414) DBG |   
	I0815 23:20:39.222602   30687 main.go:141] libmachine: (ha-175414) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 23:20:39.222608   30687 main.go:141] libmachine: (ha-175414) DBG |     <dhcp>
	I0815 23:20:39.222615   30687 main.go:141] libmachine: (ha-175414) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 23:20:39.222623   30687 main.go:141] libmachine: (ha-175414) DBG |     </dhcp>
	I0815 23:20:39.222631   30687 main.go:141] libmachine: (ha-175414) DBG |   </ip>
	I0815 23:20:39.222647   30687 main.go:141] libmachine: (ha-175414) DBG |   
	I0815 23:20:39.222658   30687 main.go:141] libmachine: (ha-175414) DBG | </network>
	I0815 23:20:39.222673   30687 main.go:141] libmachine: (ha-175414) DBG | 
	I0815 23:20:39.227788   30687 main.go:141] libmachine: (ha-175414) DBG | trying to create private KVM network mk-ha-175414 192.168.39.0/24...
	I0815 23:20:39.292857   30687 main.go:141] libmachine: (ha-175414) DBG | private KVM network mk-ha-175414 192.168.39.0/24 created
	I0815 23:20:39.292884   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.292810   30710 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:20:39.292983   30687 main.go:141] libmachine: (ha-175414) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414 ...
	I0815 23:20:39.293022   30687 main.go:141] libmachine: (ha-175414) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 23:20:39.293049   30687 main.go:141] libmachine: (ha-175414) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 23:20:39.534225   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.534136   30710 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa...
	I0815 23:20:39.626298   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.626192   30710 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/ha-175414.rawdisk...
	I0815 23:20:39.626322   30687 main.go:141] libmachine: (ha-175414) DBG | Writing magic tar header
	I0815 23:20:39.626332   30687 main.go:141] libmachine: (ha-175414) DBG | Writing SSH key tar header
	I0815 23:20:39.626339   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.626322   30710 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414 ...
	I0815 23:20:39.626440   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414
	I0815 23:20:39.626477   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414 (perms=drwx------)
	I0815 23:20:39.626485   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0815 23:20:39.626495   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:20:39.626500   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0815 23:20:39.626510   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 23:20:39.626516   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins
	I0815 23:20:39.626522   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home
	I0815 23:20:39.626528   30687 main.go:141] libmachine: (ha-175414) DBG | Skipping /home - not owner
	I0815 23:20:39.626541   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0815 23:20:39.626552   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0815 23:20:39.626573   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0815 23:20:39.626581   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 23:20:39.626590   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 23:20:39.626595   30687 main.go:141] libmachine: (ha-175414) Creating domain...
	I0815 23:20:39.627405   30687 main.go:141] libmachine: (ha-175414) define libvirt domain using xml: 
	I0815 23:20:39.627427   30687 main.go:141] libmachine: (ha-175414) <domain type='kvm'>
	I0815 23:20:39.627435   30687 main.go:141] libmachine: (ha-175414)   <name>ha-175414</name>
	I0815 23:20:39.627439   30687 main.go:141] libmachine: (ha-175414)   <memory unit='MiB'>2200</memory>
	I0815 23:20:39.627444   30687 main.go:141] libmachine: (ha-175414)   <vcpu>2</vcpu>
	I0815 23:20:39.627450   30687 main.go:141] libmachine: (ha-175414)   <features>
	I0815 23:20:39.627455   30687 main.go:141] libmachine: (ha-175414)     <acpi/>
	I0815 23:20:39.627459   30687 main.go:141] libmachine: (ha-175414)     <apic/>
	I0815 23:20:39.627473   30687 main.go:141] libmachine: (ha-175414)     <pae/>
	I0815 23:20:39.627481   30687 main.go:141] libmachine: (ha-175414)     
	I0815 23:20:39.627493   30687 main.go:141] libmachine: (ha-175414)   </features>
	I0815 23:20:39.627501   30687 main.go:141] libmachine: (ha-175414)   <cpu mode='host-passthrough'>
	I0815 23:20:39.627510   30687 main.go:141] libmachine: (ha-175414)   
	I0815 23:20:39.627517   30687 main.go:141] libmachine: (ha-175414)   </cpu>
	I0815 23:20:39.627523   30687 main.go:141] libmachine: (ha-175414)   <os>
	I0815 23:20:39.627534   30687 main.go:141] libmachine: (ha-175414)     <type>hvm</type>
	I0815 23:20:39.627542   30687 main.go:141] libmachine: (ha-175414)     <boot dev='cdrom'/>
	I0815 23:20:39.627546   30687 main.go:141] libmachine: (ha-175414)     <boot dev='hd'/>
	I0815 23:20:39.627551   30687 main.go:141] libmachine: (ha-175414)     <bootmenu enable='no'/>
	I0815 23:20:39.627558   30687 main.go:141] libmachine: (ha-175414)   </os>
	I0815 23:20:39.627563   30687 main.go:141] libmachine: (ha-175414)   <devices>
	I0815 23:20:39.627567   30687 main.go:141] libmachine: (ha-175414)     <disk type='file' device='cdrom'>
	I0815 23:20:39.627579   30687 main.go:141] libmachine: (ha-175414)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/boot2docker.iso'/>
	I0815 23:20:39.627592   30687 main.go:141] libmachine: (ha-175414)       <target dev='hdc' bus='scsi'/>
	I0815 23:20:39.627601   30687 main.go:141] libmachine: (ha-175414)       <readonly/>
	I0815 23:20:39.627607   30687 main.go:141] libmachine: (ha-175414)     </disk>
	I0815 23:20:39.627618   30687 main.go:141] libmachine: (ha-175414)     <disk type='file' device='disk'>
	I0815 23:20:39.627625   30687 main.go:141] libmachine: (ha-175414)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 23:20:39.627632   30687 main.go:141] libmachine: (ha-175414)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/ha-175414.rawdisk'/>
	I0815 23:20:39.627643   30687 main.go:141] libmachine: (ha-175414)       <target dev='hda' bus='virtio'/>
	I0815 23:20:39.627666   30687 main.go:141] libmachine: (ha-175414)     </disk>
	I0815 23:20:39.627689   30687 main.go:141] libmachine: (ha-175414)     <interface type='network'>
	I0815 23:20:39.627700   30687 main.go:141] libmachine: (ha-175414)       <source network='mk-ha-175414'/>
	I0815 23:20:39.627711   30687 main.go:141] libmachine: (ha-175414)       <model type='virtio'/>
	I0815 23:20:39.627720   30687 main.go:141] libmachine: (ha-175414)     </interface>
	I0815 23:20:39.627731   30687 main.go:141] libmachine: (ha-175414)     <interface type='network'>
	I0815 23:20:39.627744   30687 main.go:141] libmachine: (ha-175414)       <source network='default'/>
	I0815 23:20:39.627754   30687 main.go:141] libmachine: (ha-175414)       <model type='virtio'/>
	I0815 23:20:39.627780   30687 main.go:141] libmachine: (ha-175414)     </interface>
	I0815 23:20:39.627801   30687 main.go:141] libmachine: (ha-175414)     <serial type='pty'>
	I0815 23:20:39.627814   30687 main.go:141] libmachine: (ha-175414)       <target port='0'/>
	I0815 23:20:39.627828   30687 main.go:141] libmachine: (ha-175414)     </serial>
	I0815 23:20:39.627841   30687 main.go:141] libmachine: (ha-175414)     <console type='pty'>
	I0815 23:20:39.627854   30687 main.go:141] libmachine: (ha-175414)       <target type='serial' port='0'/>
	I0815 23:20:39.627867   30687 main.go:141] libmachine: (ha-175414)     </console>
	I0815 23:20:39.627878   30687 main.go:141] libmachine: (ha-175414)     <rng model='virtio'>
	I0815 23:20:39.627904   30687 main.go:141] libmachine: (ha-175414)       <backend model='random'>/dev/random</backend>
	I0815 23:20:39.627919   30687 main.go:141] libmachine: (ha-175414)     </rng>
	I0815 23:20:39.627930   30687 main.go:141] libmachine: (ha-175414)     
	I0815 23:20:39.627940   30687 main.go:141] libmachine: (ha-175414)     
	I0815 23:20:39.627949   30687 main.go:141] libmachine: (ha-175414)   </devices>
	I0815 23:20:39.627955   30687 main.go:141] libmachine: (ha-175414) </domain>
	I0815 23:20:39.627965   30687 main.go:141] libmachine: (ha-175414) 
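The domain definition above is rendered from a template and then handed to libvirt. As a point of reference, here is a minimal Go sketch of that rendering step, assuming text/template and a trimmed-down field set; the struct, template body, and paths are illustrative, not minikube's actual template:

// domainxml.go - render a libvirt <domain> definition with text/template (sketch).
package main

import (
	"os"
	"text/template"
)

type domainConfig struct {
	Name     string
	MemoryMB int
	VCPUs    int
	DiskPath string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:     "ha-175414",
		MemoryMB: 2200,
		VCPUs:    2,
		DiskPath: "/path/to/machines/ha-175414/ha-175414.rawdisk", // placeholder path
		Network:  "mk-ha-175414",
	}
	// Render to stdout; the driver would hand the resulting XML to libvirt's define-domain call.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}

In the run above the rendered XML also carries the boot ISO, serial console, rng device, and the second NIC before the domain is defined against the hypervisor.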
	I0815 23:20:39.632318   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:03:37:1f in network default
	I0815 23:20:39.632914   30687 main.go:141] libmachine: (ha-175414) Ensuring networks are active...
	I0815 23:20:39.632944   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:39.633550   30687 main.go:141] libmachine: (ha-175414) Ensuring network default is active
	I0815 23:20:39.633879   30687 main.go:141] libmachine: (ha-175414) Ensuring network mk-ha-175414 is active
	I0815 23:20:39.634408   30687 main.go:141] libmachine: (ha-175414) Getting domain xml...
	I0815 23:20:39.635048   30687 main.go:141] libmachine: (ha-175414) Creating domain...
	I0815 23:20:40.841021   30687 main.go:141] libmachine: (ha-175414) Waiting to get IP...
	I0815 23:20:40.841732   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:40.842079   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:40.842100   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:40.842058   30710 retry.go:31] will retry after 195.088814ms: waiting for machine to come up
	I0815 23:20:41.038377   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:41.038675   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:41.038698   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:41.038627   30710 retry.go:31] will retry after 350.43297ms: waiting for machine to come up
	I0815 23:20:41.391114   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:41.391547   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:41.391574   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:41.391504   30710 retry.go:31] will retry after 346.192999ms: waiting for machine to come up
	I0815 23:20:41.738883   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:41.739310   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:41.739339   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:41.739259   30710 retry.go:31] will retry after 395.632919ms: waiting for machine to come up
	I0815 23:20:42.136722   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:42.137183   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:42.137211   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:42.137145   30710 retry.go:31] will retry after 640.154019ms: waiting for machine to come up
	I0815 23:20:42.779013   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:42.779527   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:42.779568   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:42.779489   30710 retry.go:31] will retry after 897.025784ms: waiting for machine to come up
	I0815 23:20:43.678800   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:43.679312   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:43.679358   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:43.679271   30710 retry.go:31] will retry after 1.071070056s: waiting for machine to come up
	I0815 23:20:44.752300   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:44.752783   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:44.752814   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:44.752732   30710 retry.go:31] will retry after 1.252527242s: waiting for machine to come up
	I0815 23:20:46.006923   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:46.007343   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:46.007369   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:46.007297   30710 retry.go:31] will retry after 1.860999961s: waiting for machine to come up
	I0815 23:20:47.870262   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:47.870687   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:47.870723   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:47.870649   30710 retry.go:31] will retry after 1.673749324s: waiting for machine to come up
	I0815 23:20:49.546472   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:49.546888   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:49.546915   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:49.546856   30710 retry.go:31] will retry after 1.873147128s: waiting for machine to come up
	I0815 23:20:51.423020   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:51.423549   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:51.423577   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:51.423500   30710 retry.go:31] will retry after 3.056668989s: waiting for machine to come up
	I0815 23:20:54.481416   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:54.481960   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:54.481982   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:54.481891   30710 retry.go:31] will retry after 4.021901294s: waiting for machine to come up
	I0815 23:20:58.507975   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:58.508455   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:58.508502   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:58.508428   30710 retry.go:31] will retry after 3.780383701s: waiting for machine to come up
	I0815 23:21:02.292116   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.292616   30687 main.go:141] libmachine: (ha-175414) Found IP for machine: 192.168.39.67
	I0815 23:21:02.292668   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has current primary IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
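The wait-for-IP loop above polls the DHCP leases and sleeps for a growing interval between attempts (195ms, 350ms, ... up to several seconds). A minimal sketch of that retry pattern, assuming a plain exponential backoff with jitter; the helper name and constants are assumptions, not minikube's retry package API:

// retry.go - retry with growing, jittered delay, as in the "will retry after ..." lines above (sketch).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or the total deadline passes.
func retryWithBackoff(fn func() error, initial, max, deadline time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return errors.New("timed out waiting for machine to come up")
		}
		// Jitter the delay so concurrent waiters do not poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay *= 2; delay > max {
			delay = max
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet") // stand-in for "unable to find current IP address"
		}
		return nil
	}, 200*time.Millisecond, 4*time.Second, 30*time.Second)
	fmt.Println("err:", err, "attempts:", attempts)
}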
	I0815 23:21:02.292682   30687 main.go:141] libmachine: (ha-175414) Reserving static IP address...
	I0815 23:21:02.293043   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find host DHCP lease matching {name: "ha-175414", mac: "52:54:00:f0:98:13", ip: "192.168.39.67"} in network mk-ha-175414
	I0815 23:21:02.363118   30687 main.go:141] libmachine: (ha-175414) Reserved static IP address: 192.168.39.67
	I0815 23:21:02.363144   30687 main.go:141] libmachine: (ha-175414) Waiting for SSH to be available...
	I0815 23:21:02.363160   30687 main.go:141] libmachine: (ha-175414) DBG | Getting to WaitForSSH function...
	I0815 23:21:02.365565   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.366680   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.366803   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.367398   30687 main.go:141] libmachine: (ha-175414) DBG | Using SSH client type: external
	I0815 23:21:02.367417   30687 main.go:141] libmachine: (ha-175414) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa (-rw-------)
	I0815 23:21:02.367461   30687 main.go:141] libmachine: (ha-175414) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:21:02.367486   30687 main.go:141] libmachine: (ha-175414) DBG | About to run SSH command:
	I0815 23:21:02.367520   30687 main.go:141] libmachine: (ha-175414) DBG | exit 0
	I0815 23:21:02.494052   30687 main.go:141] libmachine: (ha-175414) DBG | SSH cmd err, output: <nil>: 
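The availability probe above shells out to the system ssh client with non-interactive options and runs `exit 0`; a zero exit status means sshd answered and the key was accepted. A sketch of the same probe, reusing the flags shown in the log; the helper and the placeholder key path are illustrative:

// sshprobe.go - probe SSH availability by running `exit 0` through the system ssh client (sketch).
package main

import (
	"fmt"
	"os/exec"
)

func sshReady(user, host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	// Run() returns nil only when the remote command exited 0.
	return exec.Command("ssh", args...).Run()
}

func main() {
	// Placeholder key path; the real one lives under the machine's profile directory.
	if err := sshReady("docker", "192.168.39.67", "/path/to/machines/ha-175414/id_rsa"); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("ssh is available")
}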
	I0815 23:21:02.494294   30687 main.go:141] libmachine: (ha-175414) KVM machine creation complete!
	I0815 23:21:02.494680   30687 main.go:141] libmachine: (ha-175414) Calling .GetConfigRaw
	I0815 23:21:02.495185   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:02.495410   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:02.495586   30687 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 23:21:02.495599   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:02.496803   30687 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 23:21:02.496816   30687 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 23:21:02.496822   30687 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 23:21:02.496827   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.498916   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.499207   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.499244   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.499311   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:02.499491   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.499626   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.499772   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:02.499899   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:02.500112   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:02.500126   30687 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 23:21:02.605241   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:21:02.605269   30687 main.go:141] libmachine: Detecting the provisioner...
	I0815 23:21:02.605279   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.608064   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.608413   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.608440   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.608558   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:02.608751   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.608949   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.609112   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:02.609282   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:02.609441   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:02.609452   30687 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 23:21:02.718593   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 23:21:02.718654   30687 main.go:141] libmachine: found compatible host: buildroot
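Provisioner detection boils down to fetching /etc/os-release over SSH and keying off the ID field ("buildroot" above). A small sketch of the parsing half, assuming the standard KEY=VALUE format; the function name is an assumption and fetching the file is elided:

// osrelease.go - parse /etc/os-release and report the provisioner ID (sketch).
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(contents string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, val, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[key] = strings.Trim(val, `"`)
	}
	return info
}

func main() {
	// Sample taken from the SSH output above.
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	info := parseOSRelease(sample)
	fmt.Println("provisioner:", info["ID"], "version:", info["VERSION_ID"])
}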
	I0815 23:21:02.718664   30687 main.go:141] libmachine: Provisioning with buildroot...
	I0815 23:21:02.718676   30687 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:21:02.718967   30687 buildroot.go:166] provisioning hostname "ha-175414"
	I0815 23:21:02.719001   30687 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:21:02.719221   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.721638   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.722011   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.722037   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.722188   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:02.722351   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.722490   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.722637   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:02.722812   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:02.722962   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:02.722973   30687 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-175414 && echo "ha-175414" | sudo tee /etc/hostname
	I0815 23:21:02.844709   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414
	
	I0815 23:21:02.844754   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.847473   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.847800   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.847829   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.847980   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:02.848180   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.848321   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.848427   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:02.848548   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:02.848729   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:02.848751   30687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-175414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-175414/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-175414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:21:02.967096   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
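The shell fragment above is an idempotent /etc/hosts update: it leaves the file alone when a line already maps the hostname, rewrites an existing 127.0.1.1 entry if one is present, and appends otherwise. The same decision logic expressed directly over the file contents, as a sketch (function name and in-memory handling are assumptions):

// hostsentry.go - idempotent hostname entry for /etc/hosts, mirroring the shell logic above (sketch).
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func ensureHostname(hosts, name string) string {
	// Already mapped on some line: nothing to do.
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	// Rewrite an existing 127.0.1.1 line if present.
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	// Otherwise append a fresh entry.
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(before, "ha-175414"))
}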
	I0815 23:21:02.967125   30687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:21:02.967185   30687 buildroot.go:174] setting up certificates
	I0815 23:21:02.967200   30687 provision.go:84] configureAuth start
	I0815 23:21:02.967220   30687 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:21:02.967510   30687 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:21:02.969990   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.970366   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.970388   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.970606   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.972920   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.973267   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.973295   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.973369   30687 provision.go:143] copyHostCerts
	I0815 23:21:02.973400   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:21:02.973485   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:21:02.973509   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:21:02.973575   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:21:02.973663   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:21:02.973682   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:21:02.973686   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:21:02.973735   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:21:02.973792   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:21:02.973814   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:21:02.973820   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:21:02.973864   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:21:02.973942   30687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.ha-175414 san=[127.0.0.1 192.168.39.67 ha-175414 localhost minikube]
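The server certificate above is issued from the local CA with a SAN list covering the loopback address, the VM IP, the hostname, localhost, and minikube. A self-contained sketch of issuing such a certificate with crypto/x509, assuming a throwaway in-process CA instead of the ca.pem/ca-key.pem pair from the profile directory; names, paths, and validity period are illustrative:

// servercert.go - issue a server certificate with DNS and IP SANs, signed by a CA (sketch).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; in the run above this comes from the .minikube certs directory.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-175414"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-175414", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write the PEM-encoded server certificate (the server.pem that is later scp'd to /etc/docker).
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}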
	I0815 23:21:03.246553   30687 provision.go:177] copyRemoteCerts
	I0815 23:21:03.246613   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:21:03.246633   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.249195   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.249489   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.249518   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.249716   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.249960   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.250101   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.250212   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:03.332109   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:21:03.332191   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:21:03.357349   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:21:03.357427   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0815 23:21:03.382778   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:21:03.382852   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 23:21:03.407683   30687 provision.go:87] duration metric: took 440.469279ms to configureAuth
	I0815 23:21:03.407710   30687 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:21:03.407922   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:03.407991   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.410375   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.410696   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.410723   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.410927   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.411105   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.411264   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.411374   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.411505   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:03.411661   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:03.411676   30687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:21:03.684024   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:21:03.684057   30687 main.go:141] libmachine: Checking connection to Docker...
	I0815 23:21:03.684068   30687 main.go:141] libmachine: (ha-175414) Calling .GetURL
	I0815 23:21:03.685193   30687 main.go:141] libmachine: (ha-175414) DBG | Using libvirt version 6000000
	I0815 23:21:03.687439   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.687738   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.687759   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.687927   30687 main.go:141] libmachine: Docker is up and running!
	I0815 23:21:03.687942   30687 main.go:141] libmachine: Reticulating splines...
	I0815 23:21:03.687948   30687 client.go:171] duration metric: took 24.468376965s to LocalClient.Create
	I0815 23:21:03.687969   30687 start.go:167] duration metric: took 24.468433657s to libmachine.API.Create "ha-175414"
	I0815 23:21:03.687981   30687 start.go:293] postStartSetup for "ha-175414" (driver="kvm2")
	I0815 23:21:03.687995   30687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:21:03.688010   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.688257   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:21:03.688281   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.690410   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.690752   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.690780   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.690961   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.691120   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.691250   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.691380   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:03.778909   30687 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:21:03.783412   30687 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:21:03.783441   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:21:03.783510   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:21:03.783601   30687 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:21:03.783613   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:21:03.783733   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:21:03.794334   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:21:03.819021   30687 start.go:296] duration metric: took 131.025603ms for postStartSetup
	I0815 23:21:03.819066   30687 main.go:141] libmachine: (ha-175414) Calling .GetConfigRaw
	I0815 23:21:03.819613   30687 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:21:03.822089   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.822354   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.822373   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.822601   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:21:03.822776   30687 start.go:128] duration metric: took 24.620953921s to createHost
	I0815 23:21:03.822794   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.825003   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.825359   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.825390   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.825454   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.825626   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.826109   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.826269   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.826442   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:03.826614   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:03.826628   30687 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:21:03.934709   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764063.912105974
	
	I0815 23:21:03.934737   30687 fix.go:216] guest clock: 1723764063.912105974
	I0815 23:21:03.934745   30687 fix.go:229] Guest: 2024-08-15 23:21:03.912105974 +0000 UTC Remote: 2024-08-15 23:21:03.822784949 +0000 UTC m=+24.724050572 (delta=89.321025ms)
	I0815 23:21:03.934763   30687 fix.go:200] guest clock delta is within tolerance: 89.321025ms
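The guest clock check runs `date +%s.%N` on the VM, compares it with the host-side timestamp, and only resynchronizes when the delta leaves a tolerance window. A sketch of that comparison using the two timestamps from the log; the 2s threshold is an assumption, not taken from the run:

// clockdelta.go - parse the guest's `date +%s.%N` output and check the host/guest delta (sketch).
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	s := strings.TrimSpace(out)
	secStr, nsecStr, _ := strings.Cut(s, ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	// Pad/truncate the fractional part to exactly nanoseconds.
	nsecStr = (nsecStr + "000000000")[:9]
	nsec, err := strconv.ParseInt(nsecStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723764063.912105974") // guest value from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1723764063, 822784949) // host-side timestamp from the log
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta.Abs() <= tolerance)
	// Prints delta=89.321025ms withinTolerance=true, matching the fix.go lines above.
}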
	I0815 23:21:03.934768   30687 start.go:83] releasing machines lock for "ha-175414", held for 24.733043179s
	I0815 23:21:03.934785   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.935067   30687 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:21:03.937686   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.938050   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.938080   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.938226   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.938727   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.938908   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.938986   30687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:21:03.939029   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.939125   30687 ssh_runner.go:195] Run: cat /version.json
	I0815 23:21:03.939144   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.941471   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.941727   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.941805   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.941830   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.941937   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.942039   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.942060   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.942106   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.942302   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.942312   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.942490   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.942509   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:03.942657   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.942815   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:04.023345   30687 ssh_runner.go:195] Run: systemctl --version
	I0815 23:21:04.044213   30687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:21:04.211504   30687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:21:04.217481   30687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:21:04.217560   30687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:21:04.235510   30687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 23:21:04.235537   30687 start.go:495] detecting cgroup driver to use...
	I0815 23:21:04.235603   30687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:21:04.252899   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:21:04.267198   30687 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:21:04.267246   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:21:04.281265   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:21:04.295754   30687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:21:04.415851   30687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:21:04.572466   30687 docker.go:233] disabling docker service ...
	I0815 23:21:04.572529   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:21:04.586435   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:21:04.599790   30687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:21:04.721646   30687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:21:04.842009   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:21:04.856666   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:21:04.875455   30687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:21:04.875524   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.885652   30687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:21:04.885719   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.895820   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.906250   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.916710   30687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:21:04.927500   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.938716   30687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.956186   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
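Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with settings along the following lines. This fragment is reconstructed for illustration (section placement follows CRI-O's documented config layout) and was not captured from the run:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]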
	I0815 23:21:04.966841   30687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:21:04.976627   30687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 23:21:04.976691   30687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 23:21:04.989636   30687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:21:04.999689   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:21:05.114749   30687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 23:21:05.252784   30687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:21:05.252856   30687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:21:05.258037   30687 start.go:563] Will wait 60s for crictl version
	I0815 23:21:05.258101   30687 ssh_runner.go:195] Run: which crictl
	I0815 23:21:05.262019   30687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:21:05.310161   30687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:21:05.310242   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:21:05.338380   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:21:05.368390   30687 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:21:05.369453   30687 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:21:05.371970   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:05.372254   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:05.372280   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:05.372457   30687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:21:05.376620   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:21:05.390218   30687 kubeadm.go:883] updating cluster {Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 23:21:05.390313   30687 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:21:05.390363   30687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:21:05.426809   30687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
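The preload check runs `sudo crictl images --output json` and looks for the expected control-plane image; when it is missing, as here, the cached tarball is copied over instead. A sketch of parsing that listing, assuming crictl's JSON field names (`images`, `repoTags`) and a hypothetical helper:

// preloadcheck.go - decide whether images are preloaded from `crictl images --output json` (sketch).
package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(crictlJSON []byte, wanted string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(crictlJSON, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == wanted {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
	ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.31.0")
	fmt.Println(ok, err) // false <nil> -> "assuming images are not preloaded"
}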
	I0815 23:21:05.426888   30687 ssh_runner.go:195] Run: which lz4
	I0815 23:21:05.430910   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0815 23:21:05.431000   30687 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 23:21:05.435499   30687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 23:21:05.435524   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 23:21:06.815675   30687 crio.go:462] duration metric: took 1.384702615s to copy over tarball
	I0815 23:21:06.815754   30687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 23:21:08.869910   30687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.054131365s)
	I0815 23:21:08.869942   30687 crio.go:469] duration metric: took 2.054241253s to extract the tarball
	I0815 23:21:08.869949   30687 ssh_runner.go:146] rm: /preloaded.tar.lz4
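The preload is copied to /preloaded.tar.lz4 and unpacked into /var with extended attributes preserved so image layers keep their file capabilities. A sketch of invoking the same extraction locally and timing it, mirroring the command from the log; the wrapper itself is an assumption:

// preloadextract.go - run the preload extraction and report its duration (sketch).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress with lz4 on the fly
		"-C", "/var", // CRI-O image and container state live under /var
		"-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}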
	I0815 23:21:08.907690   30687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:21:08.952823   30687 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:21:08.952841   30687 cache_images.go:84] Images are preloaded, skipping loading
	I0815 23:21:08.952848   30687 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.0 crio true true} ...
	I0815 23:21:08.952994   30687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-175414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:21:08.953085   30687 ssh_runner.go:195] Run: crio config
	I0815 23:21:09.002052   30687 cni.go:84] Creating CNI manager for ""
	I0815 23:21:09.002073   30687 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 23:21:09.002083   30687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 23:21:09.002110   30687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-175414 NodeName:ha-175414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 23:21:09.002284   30687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-175414"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
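
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) travel as a single multi-document YAML stream that the log later copies to /var/tmp/minikube/kubeadm.yaml.new and then to kubeadm.yaml. A minimal Go sketch, assuming the file is already at /var/tmp/minikube/kubeadm.yaml and that gopkg.in/yaml.v3 is available (neither is part of the test harness), that decodes the stream and prints each document's apiVersion and kind:

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Assumed path: the log copies kubeadm.yaml.new to /var/tmp/minikube/kubeadm.yaml later on.
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Expect four documents: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration and KubeProxyConfiguration.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
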
	
	I0815 23:21:09.002310   30687 kube-vip.go:115] generating kube-vip config ...
	I0815 23:21:09.002358   30687 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 23:21:09.019183   30687 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 23:21:09.019296   30687 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
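
The static pod above gives the cluster its HA entry point: kube-vip takes the plndr-cp-lock lease, answers ARP for the virtual IP 192.168.39.254 on eth0, and with lb_enable load-balances port 8443 across control-plane nodes, so control-plane.minikube.internal:8443 keeps working no matter which node currently holds the VIP. A minimal Go sketch, assuming the VIP is reachable from where the probe runs, that polls the VIP's /healthz endpoint until it answers; any HTTP response, even 401/403, already shows the address is being served:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Liveness probe only: the apiserver serving cert is validated elsewhere in the flow.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// 192.168.39.254 is the APIServerHAVIP from this log.
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err == nil {
			fmt.Println("VIP answered:", resp.Status)
			resp.Body.Close()
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("VIP did not answer within a minute")
}
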
	I0815 23:21:09.019360   30687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:21:09.029784   30687 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 23:21:09.029863   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 23:21:09.039534   30687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0815 23:21:09.056501   30687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:21:09.073482   30687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0815 23:21:09.089735   30687 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0815 23:21:09.106335   30687 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 23:21:09.110310   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:21:09.122925   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:21:09.246127   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:21:09.263803   30687 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414 for IP: 192.168.39.67
	I0815 23:21:09.263822   30687 certs.go:194] generating shared ca certs ...
	I0815 23:21:09.263836   30687 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.264001   30687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:21:09.264074   30687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:21:09.264087   30687 certs.go:256] generating profile certs ...
	I0815 23:21:09.264187   30687 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key
	I0815 23:21:09.264214   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt with IP's: []
	I0815 23:21:09.320117   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt ...
	I0815 23:21:09.320142   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt: {Name:mkd1d68ac3a3761648f6241a5bda961db1b0339d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.320308   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key ...
	I0815 23:21:09.320319   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key: {Name:mkbb5a5c392511e6cda86c3a57e5cb385c0dab88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.320400   30687 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.20c82d28
	I0815 23:21:09.320428   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.20c82d28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.254]
	I0815 23:21:09.683881   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.20c82d28 ...
	I0815 23:21:09.683908   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.20c82d28: {Name:mkdedd169d9ef2899ccb567dcfb81c1c89a42da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.684062   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.20c82d28 ...
	I0815 23:21:09.684074   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.20c82d28: {Name:mkee298d112daeb0367b95864f61c25cb9dd721d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.684151   30687 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.20c82d28 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt
	I0815 23:21:09.684217   30687 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.20c82d28 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key
	I0815 23:21:09.684268   30687 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key
	I0815 23:21:09.684281   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt with IP's: []
	I0815 23:21:09.860951   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt ...
	I0815 23:21:09.860983   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt: {Name:mkc8b77b93ca3212f3e604b092660415423e7e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.861154   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key ...
	I0815 23:21:09.861166   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key: {Name:mkd60f00950a94e9b4a75caa9bd3e4a6d1de8348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
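
crypto.go is generating the profile's certificates here: a client cert for minikube-user, an apiserver serving cert whose SANs include the service IP 10.96.0.1, loopback, the node IP 192.168.39.67 and the HA VIP 192.168.39.254, and a front-proxy ("aggregator") client cert, all signed by the shared minikubeCA. A condensed Go sketch of the same idea using only the standard library; it signs with a throwaway CA instead of the real minikubeCA, and the key size, validity and output file name are assumptions (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA; in the real flow the CA already exists on disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert carrying the same IP SANs the log reports for apiserver.crt.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.67"),
			net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	out, _ := os.Create("apiserver.crt") // file name is illustrative only
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
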
	I0815 23:21:09.861235   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:21:09.861251   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:21:09.861262   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:21:09.861275   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:21:09.861286   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:21:09.861300   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:21:09.861313   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:21:09.861325   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:21:09.861371   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:21:09.861408   30687 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:21:09.861418   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:21:09.861477   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:21:09.861505   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:21:09.861526   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:21:09.861563   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:21:09.861589   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:21:09.861604   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:09.861616   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:21:09.862152   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:21:09.888860   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:21:09.914161   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:21:09.939456   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:21:09.965239   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 23:21:09.990555   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 23:21:10.022396   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:21:10.050838   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:21:10.086066   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:21:10.111709   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:21:10.137236   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:21:10.162745   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 23:21:10.188830   30687 ssh_runner.go:195] Run: openssl version
	I0815 23:21:10.195631   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:21:10.207281   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:21:10.212435   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:21:10.212494   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:21:10.219492   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 23:21:10.231179   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:21:10.242962   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:10.247439   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:10.247508   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:10.253224   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 23:21:10.264885   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:21:10.276294   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:21:10.280825   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:21:10.280890   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:21:10.287542   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
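
The ln -fs commands above publish each CA into the system trust store the way OpenSSL expects: a symlink named <subject-hash>.0 in /etc/ssl/certs, where the hash is exactly what "openssl x509 -hash -noout" prints (b5213941 for minikubeCA.pem, 3ec20f2e and 51391683 for the two test certs in this log). A small Go sketch of the same step for one certificate, assuming it runs with enough privileges to write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

	// Same hashing command the test driver runs over SSH.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replicate ln -fs: force-replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}
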
	I0815 23:21:10.300908   30687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:21:10.305355   30687 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 23:21:10.305404   30687 kubeadm.go:392] StartCluster: {Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:21:10.305470   30687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 23:21:10.305507   30687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 23:21:10.343742   30687 cri.go:89] found id: ""
	I0815 23:21:10.343809   30687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 23:21:10.356051   30687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 23:21:10.366669   30687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 23:21:10.377294   30687 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 23:21:10.377315   30687 kubeadm.go:157] found existing configuration files:
	
	I0815 23:21:10.377358   30687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 23:21:10.387368   30687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 23:21:10.387429   30687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 23:21:10.397415   30687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 23:21:10.407268   30687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 23:21:10.407329   30687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 23:21:10.417956   30687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 23:21:10.427875   30687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 23:21:10.427934   30687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 23:21:10.438137   30687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 23:21:10.448102   30687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 23:21:10.448151   30687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 23:21:10.458412   30687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 23:21:10.575205   30687 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 23:21:10.575285   30687 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 23:21:10.704641   30687 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 23:21:10.704922   30687 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 23:21:10.705075   30687 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 23:21:10.717110   30687 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 23:21:10.777741   30687 out.go:235]   - Generating certificates and keys ...
	I0815 23:21:10.777907   30687 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 23:21:10.777973   30687 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 23:21:10.809009   30687 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 23:21:11.174417   30687 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 23:21:11.336144   30687 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 23:21:11.502745   30687 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 23:21:11.621432   30687 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 23:21:11.621744   30687 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-175414 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0815 23:21:11.840088   30687 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 23:21:11.840306   30687 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-175414 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0815 23:21:11.982660   30687 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 23:21:12.157923   30687 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 23:21:12.264631   30687 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 23:21:12.264872   30687 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 23:21:12.400847   30687 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 23:21:12.624721   30687 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 23:21:12.804857   30687 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 23:21:13.035081   30687 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 23:21:13.117127   30687 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 23:21:13.117749   30687 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 23:21:13.123359   30687 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 23:21:13.126175   30687 out.go:235]   - Booting up control plane ...
	I0815 23:21:13.126279   30687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 23:21:13.126349   30687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 23:21:13.126408   30687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 23:21:13.142935   30687 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 23:21:13.149543   30687 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 23:21:13.149609   30687 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 23:21:13.281858   30687 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 23:21:13.282000   30687 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 23:21:13.784551   30687 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.916048ms
	I0815 23:21:13.784696   30687 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 23:21:19.785185   30687 kubeadm.go:310] [api-check] The API server is healthy after 6.003512006s
	I0815 23:21:19.805524   30687 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 23:21:19.819540   30687 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 23:21:20.354210   30687 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 23:21:20.354401   30687 kubeadm.go:310] [mark-control-plane] Marking the node ha-175414 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 23:21:20.372454   30687 kubeadm.go:310] [bootstrap-token] Using token: dntkld.gr81o1hgvvlllskg
	I0815 23:21:20.373930   30687 out.go:235]   - Configuring RBAC rules ...
	I0815 23:21:20.374037   30687 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 23:21:20.385231   30687 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 23:21:20.407460   30687 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 23:21:20.411925   30687 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 23:21:20.418358   30687 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 23:21:20.423618   30687 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 23:21:20.443218   30687 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 23:21:20.783008   30687 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 23:21:21.193144   30687 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 23:21:21.194166   30687 kubeadm.go:310] 
	I0815 23:21:21.194244   30687 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 23:21:21.194254   30687 kubeadm.go:310] 
	I0815 23:21:21.194349   30687 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 23:21:21.194374   30687 kubeadm.go:310] 
	I0815 23:21:21.194421   30687 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 23:21:21.194482   30687 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 23:21:21.194528   30687 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 23:21:21.194543   30687 kubeadm.go:310] 
	I0815 23:21:21.194627   30687 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 23:21:21.194637   30687 kubeadm.go:310] 
	I0815 23:21:21.194700   30687 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 23:21:21.194709   30687 kubeadm.go:310] 
	I0815 23:21:21.194781   30687 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 23:21:21.194878   30687 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 23:21:21.194948   30687 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 23:21:21.194955   30687 kubeadm.go:310] 
	I0815 23:21:21.195025   30687 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 23:21:21.195103   30687 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 23:21:21.195114   30687 kubeadm.go:310] 
	I0815 23:21:21.195208   30687 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dntkld.gr81o1hgvvlllskg \
	I0815 23:21:21.195343   30687 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0815 23:21:21.195377   30687 kubeadm.go:310] 	--control-plane 
	I0815 23:21:21.195386   30687 kubeadm.go:310] 
	I0815 23:21:21.195499   30687 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 23:21:21.195514   30687 kubeadm.go:310] 
	I0815 23:21:21.195626   30687 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dntkld.gr81o1hgvvlllskg \
	I0815 23:21:21.195764   30687 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
	I0815 23:21:21.196881   30687 kubeadm.go:310] W0815 23:21:10.556546     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 23:21:21.197276   30687 kubeadm.go:310] W0815 23:21:10.557539     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 23:21:21.197416   30687 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
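
The --discovery-token-ca-cert-hash in the join commands above lets a joining node pin the cluster CA: kubeadm hashes the DER-encoded Subject Public Key Info of ca.crt with SHA-256 and prefixes it with "sha256:". A short Go sketch that recomputes the value from the CA certificate this log stages at /var/lib/minikube/certs/ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path used throughout the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
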
	I0815 23:21:21.197446   30687 cni.go:84] Creating CNI manager for ""
	I0815 23:21:21.197458   30687 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 23:21:21.200088   30687 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 23:21:21.201397   30687 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 23:21:21.206932   30687 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 23:21:21.206953   30687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 23:21:21.232482   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 23:21:21.647423   30687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 23:21:21.647526   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:21.647530   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-175414 minikube.k8s.io/updated_at=2024_08_15T23_21_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=ha-175414 minikube.k8s.io/primary=true
	I0815 23:21:21.672169   30687 ops.go:34] apiserver oom_adj: -16
	I0815 23:21:21.815053   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:22.315198   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:22.815789   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:23.315041   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:23.816127   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:24.315181   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:24.816078   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:25.315642   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:25.424471   30687 kubeadm.go:1113] duration metric: took 3.777007269s to wait for elevateKubeSystemPrivileges
	I0815 23:21:25.424504   30687 kubeadm.go:394] duration metric: took 15.11910366s to StartCluster
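
The burst of "kubectl get sa default" calls ending here is a plain poll: the default ServiceAccount only exists once the controller-manager's service-account controller is running, and minikube waits for it (about 3.8s in this run) before granting kube-system:default cluster-admin through the minikube-rbac binding created above. A sketch of the same wait pattern, with the kubeconfig path taken from the log; the two-minute budget is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // budget is an assumption
	for time.Now().Before(deadline) {
		// Same probe the log shows, minus the sudo/SSH wrapping.
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the timestamps
	}
	fmt.Println("timed out waiting for the default service account")
}
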
	I0815 23:21:25.424526   30687 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:25.424595   30687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:21:25.425176   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:25.425384   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 23:21:25.425386   30687 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:21:25.425404   30687 start.go:241] waiting for startup goroutines ...
	I0815 23:21:25.425417   30687 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 23:21:25.425507   30687 addons.go:69] Setting storage-provisioner=true in profile "ha-175414"
	I0815 23:21:25.425514   30687 addons.go:69] Setting default-storageclass=true in profile "ha-175414"
	I0815 23:21:25.425547   30687 addons.go:234] Setting addon storage-provisioner=true in "ha-175414"
	I0815 23:21:25.425545   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:25.425557   30687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-175414"
	I0815 23:21:25.425579   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:21:25.426050   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.426050   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.426085   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.426100   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.440949   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
	I0815 23:21:25.441245   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0815 23:21:25.441438   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.441604   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.441945   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.441961   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.442103   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.442134   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.442257   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.442395   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:25.442428   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.442954   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.442999   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.444609   30687 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:21:25.444943   30687 kapi.go:59] client config for ha-175414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key", CAFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 23:21:25.445397   30687 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 23:21:25.445710   30687 addons.go:234] Setting addon default-storageclass=true in "ha-175414"
	I0815 23:21:25.445752   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:21:25.446143   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.446186   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.457427   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0815 23:21:25.457859   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.458333   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.458355   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.458658   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.458857   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:25.460360   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:25.461000   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0815 23:21:25.461383   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.461798   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.461814   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.462103   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.462419   30687 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 23:21:25.462552   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.462577   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.463676   30687 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 23:21:25.463690   30687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 23:21:25.463703   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:25.466871   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:25.467301   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:25.467326   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:25.467494   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:25.467660   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:25.467813   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:25.467933   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:25.478098   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0815 23:21:25.478410   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.478829   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.478850   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.479138   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.479297   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:25.480685   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:25.480874   30687 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 23:21:25.480888   30687 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 23:21:25.480905   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:25.483386   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:25.483724   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:25.483751   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:25.483880   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:25.484034   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:25.484165   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:25.484297   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:25.540010   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 23:21:25.607050   30687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 23:21:25.687685   30687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 23:21:25.863648   30687 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
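
The pipeline run at 23:21:25.540010 rewrites the live coredns ConfigMap: it inserts a hosts plugin block mapping 192.168.39.1 to host.minikube.internal (with fallthrough) immediately before the forward directive, adds a log directive before errors, and feeds the result back through kubectl replace. A Go sketch of the same Corefile rewrite as pure string editing; the input fragment below is a typical default Corefile, not the one from this run:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Representative default Corefile fragment; the real one is read from the coredns ConfigMap.
	corefile := `.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}`

	hostsBlock := `    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }`

	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock) // hosts block goes just above the forward directive
		}
		if trimmed == "errors" {
			out = append(out, "    log") // query logging is inserted just above errors
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}
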
	I0815 23:21:26.279811   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.279832   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.280175   30687 main.go:141] libmachine: (ha-175414) DBG | Closing plugin on server side
	I0815 23:21:26.280227   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.280243   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.280256   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.280264   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.280474   30687 main.go:141] libmachine: (ha-175414) DBG | Closing plugin on server side
	I0815 23:21:26.280474   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.280494   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.280530   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.280550   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.280545   30687 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 23:21:26.280615   30687 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 23:21:26.280713   30687 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0815 23:21:26.280729   30687 round_trippers.go:469] Request Headers:
	I0815 23:21:26.280738   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:21:26.280742   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:21:26.280777   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.280792   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.280800   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.280808   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.281001   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.281015   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.308678   30687 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0815 23:21:26.309209   30687 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0815 23:21:26.309225   30687 round_trippers.go:469] Request Headers:
	I0815 23:21:26.309235   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:21:26.309240   30687 round_trippers.go:473]     Content-Type: application/json
	I0815 23:21:26.309245   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:21:26.313823   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:21:26.314106   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.314121   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.314417   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.314433   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.316428   30687 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0815 23:21:26.317782   30687 addons.go:510] duration metric: took 892.367472ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0815 23:21:26.317815   30687 start.go:246] waiting for cluster config update ...
	I0815 23:21:26.317836   30687 start.go:255] writing updated cluster config ...
	I0815 23:21:26.319656   30687 out.go:201] 
	I0815 23:21:26.321129   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:26.321199   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:21:26.322990   30687 out.go:177] * Starting "ha-175414-m02" control-plane node in "ha-175414" cluster
	I0815 23:21:26.324296   30687 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:21:26.324316   30687 cache.go:56] Caching tarball of preloaded images
	I0815 23:21:26.324408   30687 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:21:26.324422   30687 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:21:26.324480   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:21:26.324632   30687 start.go:360] acquireMachinesLock for ha-175414-m02: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:21:26.324673   30687 start.go:364] duration metric: took 21.951µs to acquireMachinesLock for "ha-175414-m02"
	I0815 23:21:26.324694   30687 start.go:93] Provisioning new machine with config: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:21:26.324765   30687 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0815 23:21:26.326550   30687 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 23:21:26.326626   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:26.326649   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:26.341201   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0815 23:21:26.341635   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:26.342246   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:26.342270   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:26.342629   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:26.342937   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetMachineName
	I0815 23:21:26.343118   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:26.343297   30687 start.go:159] libmachine.API.Create for "ha-175414" (driver="kvm2")
	I0815 23:21:26.343323   30687 client.go:168] LocalClient.Create starting
	I0815 23:21:26.343359   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0815 23:21:26.343401   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:21:26.343421   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:21:26.343487   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0815 23:21:26.343513   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:21:26.343529   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:21:26.343552   30687 main.go:141] libmachine: Running pre-create checks...
	I0815 23:21:26.343563   30687 main.go:141] libmachine: (ha-175414-m02) Calling .PreCreateCheck
	I0815 23:21:26.343722   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetConfigRaw
	I0815 23:21:26.344139   30687 main.go:141] libmachine: Creating machine...
	I0815 23:21:26.344155   30687 main.go:141] libmachine: (ha-175414-m02) Calling .Create
	I0815 23:21:26.344282   30687 main.go:141] libmachine: (ha-175414-m02) Creating KVM machine...
	I0815 23:21:26.345587   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found existing default KVM network
	I0815 23:21:26.345727   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found existing private KVM network mk-ha-175414
	I0815 23:21:26.345866   30687 main.go:141] libmachine: (ha-175414-m02) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02 ...
	I0815 23:21:26.345890   30687 main.go:141] libmachine: (ha-175414-m02) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 23:21:26.345952   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:26.345831   31039 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:21:26.346061   30687 main.go:141] libmachine: (ha-175414-m02) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 23:21:26.604260   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:26.604134   31039 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa...
	I0815 23:21:26.747993   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:26.747888   31039 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/ha-175414-m02.rawdisk...
	I0815 23:21:26.748025   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Writing magic tar header
	I0815 23:21:26.748041   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Writing SSH key tar header
	I0815 23:21:26.748053   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:26.748013   31039 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02 ...
	I0815 23:21:26.748135   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02
	I0815 23:21:26.748160   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0815 23:21:26.748174   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02 (perms=drwx------)
	I0815 23:21:26.748188   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0815 23:21:26.748200   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:21:26.748217   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0815 23:21:26.748227   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 23:21:26.748258   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins
	I0815 23:21:26.748284   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0815 23:21:26.748293   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home
	I0815 23:21:26.748308   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Skipping /home - not owner
	I0815 23:21:26.748321   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0815 23:21:26.748330   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 23:21:26.748340   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 23:21:26.748354   30687 main.go:141] libmachine: (ha-175414-m02) Creating domain...
	I0815 23:21:26.749332   30687 main.go:141] libmachine: (ha-175414-m02) define libvirt domain using xml: 
	I0815 23:21:26.749355   30687 main.go:141] libmachine: (ha-175414-m02) <domain type='kvm'>
	I0815 23:21:26.749365   30687 main.go:141] libmachine: (ha-175414-m02)   <name>ha-175414-m02</name>
	I0815 23:21:26.749378   30687 main.go:141] libmachine: (ha-175414-m02)   <memory unit='MiB'>2200</memory>
	I0815 23:21:26.749389   30687 main.go:141] libmachine: (ha-175414-m02)   <vcpu>2</vcpu>
	I0815 23:21:26.749398   30687 main.go:141] libmachine: (ha-175414-m02)   <features>
	I0815 23:21:26.749408   30687 main.go:141] libmachine: (ha-175414-m02)     <acpi/>
	I0815 23:21:26.749415   30687 main.go:141] libmachine: (ha-175414-m02)     <apic/>
	I0815 23:21:26.749427   30687 main.go:141] libmachine: (ha-175414-m02)     <pae/>
	I0815 23:21:26.749436   30687 main.go:141] libmachine: (ha-175414-m02)     
	I0815 23:21:26.749448   30687 main.go:141] libmachine: (ha-175414-m02)   </features>
	I0815 23:21:26.749455   30687 main.go:141] libmachine: (ha-175414-m02)   <cpu mode='host-passthrough'>
	I0815 23:21:26.749482   30687 main.go:141] libmachine: (ha-175414-m02)   
	I0815 23:21:26.749497   30687 main.go:141] libmachine: (ha-175414-m02)   </cpu>
	I0815 23:21:26.749511   30687 main.go:141] libmachine: (ha-175414-m02)   <os>
	I0815 23:21:26.749522   30687 main.go:141] libmachine: (ha-175414-m02)     <type>hvm</type>
	I0815 23:21:26.749534   30687 main.go:141] libmachine: (ha-175414-m02)     <boot dev='cdrom'/>
	I0815 23:21:26.749544   30687 main.go:141] libmachine: (ha-175414-m02)     <boot dev='hd'/>
	I0815 23:21:26.749561   30687 main.go:141] libmachine: (ha-175414-m02)     <bootmenu enable='no'/>
	I0815 23:21:26.749575   30687 main.go:141] libmachine: (ha-175414-m02)   </os>
	I0815 23:21:26.749583   30687 main.go:141] libmachine: (ha-175414-m02)   <devices>
	I0815 23:21:26.749592   30687 main.go:141] libmachine: (ha-175414-m02)     <disk type='file' device='cdrom'>
	I0815 23:21:26.749607   30687 main.go:141] libmachine: (ha-175414-m02)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/boot2docker.iso'/>
	I0815 23:21:26.749618   30687 main.go:141] libmachine: (ha-175414-m02)       <target dev='hdc' bus='scsi'/>
	I0815 23:21:26.749628   30687 main.go:141] libmachine: (ha-175414-m02)       <readonly/>
	I0815 23:21:26.749638   30687 main.go:141] libmachine: (ha-175414-m02)     </disk>
	I0815 23:21:26.749653   30687 main.go:141] libmachine: (ha-175414-m02)     <disk type='file' device='disk'>
	I0815 23:21:26.749670   30687 main.go:141] libmachine: (ha-175414-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 23:21:26.749687   30687 main.go:141] libmachine: (ha-175414-m02)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/ha-175414-m02.rawdisk'/>
	I0815 23:21:26.749698   30687 main.go:141] libmachine: (ha-175414-m02)       <target dev='hda' bus='virtio'/>
	I0815 23:21:26.749709   30687 main.go:141] libmachine: (ha-175414-m02)     </disk>
	I0815 23:21:26.749716   30687 main.go:141] libmachine: (ha-175414-m02)     <interface type='network'>
	I0815 23:21:26.749723   30687 main.go:141] libmachine: (ha-175414-m02)       <source network='mk-ha-175414'/>
	I0815 23:21:26.749733   30687 main.go:141] libmachine: (ha-175414-m02)       <model type='virtio'/>
	I0815 23:21:26.749741   30687 main.go:141] libmachine: (ha-175414-m02)     </interface>
	I0815 23:21:26.749749   30687 main.go:141] libmachine: (ha-175414-m02)     <interface type='network'>
	I0815 23:21:26.749757   30687 main.go:141] libmachine: (ha-175414-m02)       <source network='default'/>
	I0815 23:21:26.749761   30687 main.go:141] libmachine: (ha-175414-m02)       <model type='virtio'/>
	I0815 23:21:26.749772   30687 main.go:141] libmachine: (ha-175414-m02)     </interface>
	I0815 23:21:26.749777   30687 main.go:141] libmachine: (ha-175414-m02)     <serial type='pty'>
	I0815 23:21:26.749784   30687 main.go:141] libmachine: (ha-175414-m02)       <target port='0'/>
	I0815 23:21:26.749788   30687 main.go:141] libmachine: (ha-175414-m02)     </serial>
	I0815 23:21:26.749794   30687 main.go:141] libmachine: (ha-175414-m02)     <console type='pty'>
	I0815 23:21:26.749799   30687 main.go:141] libmachine: (ha-175414-m02)       <target type='serial' port='0'/>
	I0815 23:21:26.749804   30687 main.go:141] libmachine: (ha-175414-m02)     </console>
	I0815 23:21:26.749809   30687 main.go:141] libmachine: (ha-175414-m02)     <rng model='virtio'>
	I0815 23:21:26.749815   30687 main.go:141] libmachine: (ha-175414-m02)       <backend model='random'>/dev/random</backend>
	I0815 23:21:26.749823   30687 main.go:141] libmachine: (ha-175414-m02)     </rng>
	I0815 23:21:26.749833   30687 main.go:141] libmachine: (ha-175414-m02)     
	I0815 23:21:26.749851   30687 main.go:141] libmachine: (ha-175414-m02)     
	I0815 23:21:26.749865   30687 main.go:141] libmachine: (ha-175414-m02)   </devices>
	I0815 23:21:26.749881   30687 main.go:141] libmachine: (ha-175414-m02) </domain>
	I0815 23:21:26.749892   30687 main.go:141] libmachine: (ha-175414-m02) 
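The <domain> definition logged line by line above is the XML the kvm2 driver hands to libvirt for the new node. A minimal sketch of producing such a definition with Go's text/template follows; the struct fields, template text, and paths are illustrative stand-ins, not minikube's actual code:

    package main

    import (
    	"os"
    	"text/template"
    )

    // domainConfig holds the values substituted into the libvirt XML.
    // Field names here are illustrative, not minikube's real struct.
    type domainConfig struct {
    	Name     string
    	MemoryMB int
    	CPUs     int
    	DiskPath string
    	ISOPath  string
    	Network  string
    }

    const domainXML = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
        <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
        <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
      </devices>
    </domain>`

    func main() {
    	tmpl := template.Must(template.New("domain").Parse(domainXML))
    	cfg := domainConfig{
    		Name:     "ha-175414-m02",
    		MemoryMB: 2200,
    		CPUs:     2,
    		DiskPath: "/path/to/ha-175414-m02.rawdisk",
    		ISOPath:  "/path/to/boot2docker.iso",
    		Network:  "mk-ha-175414",
    	}
    	// Render the XML that would then be passed to libvirt to define the domain.
    	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }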
	I0815 23:21:26.756597   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:05:fa:e3 in network default
	I0815 23:21:26.757244   30687 main.go:141] libmachine: (ha-175414-m02) Ensuring networks are active...
	I0815 23:21:26.757271   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:26.758064   30687 main.go:141] libmachine: (ha-175414-m02) Ensuring network default is active
	I0815 23:21:26.758418   30687 main.go:141] libmachine: (ha-175414-m02) Ensuring network mk-ha-175414 is active
	I0815 23:21:26.758877   30687 main.go:141] libmachine: (ha-175414-m02) Getting domain xml...
	I0815 23:21:26.759855   30687 main.go:141] libmachine: (ha-175414-m02) Creating domain...
	I0815 23:21:28.013030   30687 main.go:141] libmachine: (ha-175414-m02) Waiting to get IP...
	I0815 23:21:28.013743   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:28.014200   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:28.014249   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:28.014192   31039 retry.go:31] will retry after 225.305823ms: waiting for machine to come up
	I0815 23:21:28.241736   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:28.242241   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:28.242274   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:28.242190   31039 retry.go:31] will retry after 251.988652ms: waiting for machine to come up
	I0815 23:21:28.495601   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:28.496087   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:28.496114   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:28.496054   31039 retry.go:31] will retry after 437.060646ms: waiting for machine to come up
	I0815 23:21:28.934522   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:28.935040   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:28.935067   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:28.934984   31039 retry.go:31] will retry after 464.445073ms: waiting for machine to come up
	I0815 23:21:29.401028   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:29.401961   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:29.401982   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:29.401913   31039 retry.go:31] will retry after 530.494313ms: waiting for machine to come up
	I0815 23:21:29.933553   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:29.933978   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:29.934006   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:29.933949   31039 retry.go:31] will retry after 641.182632ms: waiting for machine to come up
	I0815 23:21:30.576770   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:30.577186   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:30.577214   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:30.577154   31039 retry.go:31] will retry after 895.397592ms: waiting for machine to come up
	I0815 23:21:31.474027   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:31.474548   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:31.474581   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:31.474479   31039 retry.go:31] will retry after 1.179069294s: waiting for machine to come up
	I0815 23:21:32.655638   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:32.656123   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:32.656150   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:32.656088   31039 retry.go:31] will retry after 1.458887896s: waiting for machine to come up
	I0815 23:21:34.116818   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:34.117301   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:34.117325   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:34.117257   31039 retry.go:31] will retry after 1.696682837s: waiting for machine to come up
	I0815 23:21:35.816124   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:35.816725   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:35.816752   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:35.816660   31039 retry.go:31] will retry after 2.009785233s: waiting for machine to come up
	I0815 23:21:37.828384   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:37.828788   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:37.828817   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:37.828737   31039 retry.go:31] will retry after 3.146592515s: waiting for machine to come up
	I0815 23:21:40.978898   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:40.979296   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:40.979320   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:40.979241   31039 retry.go:31] will retry after 2.776399607s: waiting for machine to come up
	I0815 23:21:43.758501   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:43.758923   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:43.758946   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:43.758886   31039 retry.go:31] will retry after 4.758298763s: waiting for machine to come up
	I0815 23:21:48.520002   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.520447   30687 main.go:141] libmachine: (ha-175414-m02) Found IP for machine: 192.168.39.19
	I0815 23:21:48.520466   30687 main.go:141] libmachine: (ha-175414-m02) Reserving static IP address...
	I0815 23:21:48.520479   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has current primary IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.520816   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find host DHCP lease matching {name: "ha-175414-m02", mac: "52:54:00:3f:bf:67", ip: "192.168.39.19"} in network mk-ha-175414
	I0815 23:21:48.592403   30687 main.go:141] libmachine: (ha-175414-m02) Reserved static IP address: 192.168.39.19
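The repeated "will retry after ...: waiting for machine to come up" lines above come from polling the network for the new domain's DHCP lease, sleeping a growing interval between attempts until an address appears or a deadline passes. A minimal sketch of that retry pattern in Go; lookupIP is a hypothetical stand-in for the real lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a placeholder for querying the libvirt network's DHCP
    // leases; it returns an error until an address shows up.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		// Add jitter and grow the delay, roughly matching the
    		// increasing intervals seen in the log above.
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		backoff *= 2
    	}
    	return "", fmt.Errorf("timed out after %v waiting for machine IP", timeout)
    }

    func main() {
    	if ip, err := waitForIP(5 * time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }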
	I0815 23:21:48.592434   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Getting to WaitForSSH function...
	I0815 23:21:48.592443   30687 main.go:141] libmachine: (ha-175414-m02) Waiting for SSH to be available...
	I0815 23:21:48.595218   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.595698   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:48.595728   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.595888   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Using SSH client type: external
	I0815 23:21:48.595911   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa (-rw-------)
	I0815 23:21:48.595941   30687 main.go:141] libmachine: (ha-175414-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:21:48.595954   30687 main.go:141] libmachine: (ha-175414-m02) DBG | About to run SSH command:
	I0815 23:21:48.595967   30687 main.go:141] libmachine: (ha-175414-m02) DBG | exit 0
	I0815 23:21:48.725957   30687 main.go:141] libmachine: (ha-175414-m02) DBG | SSH cmd err, output: <nil>: 
	I0815 23:21:48.726223   30687 main.go:141] libmachine: (ha-175414-m02) KVM machine creation complete!
	I0815 23:21:48.726537   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetConfigRaw
	I0815 23:21:48.727043   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:48.727249   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:48.727391   30687 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 23:21:48.727406   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:21:48.728641   30687 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 23:21:48.728653   30687 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 23:21:48.728658   30687 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 23:21:48.728666   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:48.730629   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.730945   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:48.730983   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.731145   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:48.731320   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.731459   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.731574   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:48.731722   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:48.731965   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:48.731979   30687 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 23:21:48.845468   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:21:48.845491   30687 main.go:141] libmachine: Detecting the provisioner...
	I0815 23:21:48.845499   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:48.848642   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.849140   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:48.849167   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.849324   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:48.849529   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.849692   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.849873   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:48.850060   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:48.850222   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:48.850233   30687 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 23:21:48.962809   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 23:21:48.962857   30687 main.go:141] libmachine: found compatible host: buildroot
	I0815 23:21:48.962864   30687 main.go:141] libmachine: Provisioning with buildroot...
	I0815 23:21:48.962871   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetMachineName
	I0815 23:21:48.963237   30687 buildroot.go:166] provisioning hostname "ha-175414-m02"
	I0815 23:21:48.963267   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetMachineName
	I0815 23:21:48.963457   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:48.966351   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.966740   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:48.966766   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.966866   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:48.967052   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.967199   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.967327   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:48.967460   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:48.967653   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:48.967670   30687 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-175414-m02 && echo "ha-175414-m02" | sudo tee /etc/hostname
	I0815 23:21:49.097358   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414-m02
	
	I0815 23:21:49.097399   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.099970   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.100296   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.100323   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.100652   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.100826   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.101009   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.101147   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.101325   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:49.101532   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:49.101549   30687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-175414-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-175414-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-175414-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:21:49.223309   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:21:49.223337   30687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:21:49.223356   30687 buildroot.go:174] setting up certificates
	I0815 23:21:49.223369   30687 provision.go:84] configureAuth start
	I0815 23:21:49.223382   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetMachineName
	I0815 23:21:49.223658   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:21:49.226551   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.226912   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.226937   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.227060   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.229486   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.229820   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.229858   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.230009   30687 provision.go:143] copyHostCerts
	I0815 23:21:49.230034   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:21:49.230062   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:21:49.230070   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:21:49.230157   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:21:49.230229   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:21:49.230246   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:21:49.230252   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:21:49.230279   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:21:49.230321   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:21:49.230337   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:21:49.230344   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:21:49.230363   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:21:49.230411   30687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.ha-175414-m02 san=[127.0.0.1 192.168.39.19 ha-175414-m02 localhost minikube]
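provision.go:117 above generates a TLS server certificate whose SANs cover the loopback address, the node IP, the hostname, localhost, and minikube. A compact sketch of building such a certificate with crypto/x509; it is self-signed here for brevity, whereas the real flow signs with the cluster's CA key:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Generate the server key pair.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// Certificate template with the same kinds of SANs as the log line:
    	// loopback, the node IP, the hostname, localhost, and "minikube".
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-175414-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-175414-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.19")},
    	}
    	// Self-signed for brevity; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }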
	I0815 23:21:49.393186   30687 provision.go:177] copyRemoteCerts
	I0815 23:21:49.393242   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:21:49.393262   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.396221   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.396535   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.396564   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.396718   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.396904   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.397131   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.397244   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:21:49.486103   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:21:49.486171   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:21:49.511552   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:21:49.511621   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 23:21:49.535771   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:21:49.535862   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 23:21:49.559512   30687 provision.go:87] duration metric: took 336.130825ms to configureAuth
	I0815 23:21:49.559545   30687 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:21:49.559771   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:49.559852   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.562400   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.562773   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.562795   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.562975   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.563175   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.563341   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.563454   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.563587   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:49.563763   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:49.563777   30687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:21:49.836024   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:21:49.836053   30687 main.go:141] libmachine: Checking connection to Docker...
	I0815 23:21:49.836163   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetURL
	I0815 23:21:49.837426   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Using libvirt version 6000000
	I0815 23:21:49.839557   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.839847   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.839868   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.840024   30687 main.go:141] libmachine: Docker is up and running!
	I0815 23:21:49.840039   30687 main.go:141] libmachine: Reticulating splines...
	I0815 23:21:49.840045   30687 client.go:171] duration metric: took 23.496715133s to LocalClient.Create
	I0815 23:21:49.840065   30687 start.go:167] duration metric: took 23.496770406s to libmachine.API.Create "ha-175414"
	I0815 23:21:49.840073   30687 start.go:293] postStartSetup for "ha-175414-m02" (driver="kvm2")
	I0815 23:21:49.840082   30687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:21:49.840097   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:49.840318   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:21:49.840336   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.842471   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.842793   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.842823   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.842919   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.843081   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.843221   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.843374   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:21:49.929224   30687 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:21:49.933955   30687 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:21:49.933984   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:21:49.934053   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:21:49.934140   30687 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:21:49.934152   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:21:49.934256   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:21:49.946114   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:21:49.971857   30687 start.go:296] duration metric: took 131.770868ms for postStartSetup
	I0815 23:21:49.971911   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetConfigRaw
	I0815 23:21:49.972472   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:21:49.974970   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.975569   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.975593   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.975871   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:21:49.976076   30687 start.go:128] duration metric: took 23.65129957s to createHost
	I0815 23:21:49.976105   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.978338   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.978674   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.978709   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.978853   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.979020   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.979141   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.979279   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.979459   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:49.979629   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:49.979642   30687 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:21:50.094933   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764110.070564150
	
	I0815 23:21:50.094952   30687 fix.go:216] guest clock: 1723764110.070564150
	I0815 23:21:50.094958   30687 fix.go:229] Guest: 2024-08-15 23:21:50.07056415 +0000 UTC Remote: 2024-08-15 23:21:49.976091477 +0000 UTC m=+70.877357108 (delta=94.472673ms)
	I0815 23:21:50.094973   30687 fix.go:200] guest clock delta is within tolerance: 94.472673ms
	I0815 23:21:50.094977   30687 start.go:83] releasing machines lock for "ha-175414-m02", held for 23.770294763s
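The fix.go lines above compare the guest clock (read over SSH with date +%s.%N) against the host-side timestamp and accept the machine when the delta stays inside a tolerance. A minimal sketch of that comparison using the values from the log; the two-second tolerance is an assumption for illustration, not minikube's exact threshold:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaWithinTolerance reports whether the guest clock is close
    // enough to the host clock, returning the absolute delta as well.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// Values taken from the log above: guest 1723764110.070564150,
    	// host (remote) 2024-08-15 23:21:49.976091477 UTC.
    	guest := time.Unix(1723764110, 70564150).UTC()
    	host := time.Date(2024, 8, 15, 23, 21, 49, 976091477, time.UTC)
    	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // ~94.47ms, true
    }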
	I0815 23:21:50.094997   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:50.095269   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:21:50.098173   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.098492   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:50.098525   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.100852   30687 out.go:177] * Found network options:
	I0815 23:21:50.102111   30687 out.go:177]   - NO_PROXY=192.168.39.67
	W0815 23:21:50.103403   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 23:21:50.103430   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:50.103942   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:50.104122   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:50.104191   30687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:21:50.104226   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	W0815 23:21:50.104292   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 23:21:50.104375   30687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:21:50.104398   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:50.106666   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.107067   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.107136   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:50.107161   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.107308   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:50.107452   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:50.107541   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:50.107559   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.107604   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:50.107736   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:50.107805   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:21:50.107873   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:50.108004   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:50.108123   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:21:50.345058   30687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:21:50.351133   30687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:21:50.351196   30687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:21:50.369303   30687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 23:21:50.369336   30687 start.go:495] detecting cgroup driver to use...
	I0815 23:21:50.369407   30687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:21:50.387197   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:21:50.402389   30687 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:21:50.402456   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:21:50.417085   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:21:50.431734   30687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:21:50.561170   30687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:21:50.705184   30687 docker.go:233] disabling docker service ...
	I0815 23:21:50.705263   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:21:50.720438   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:21:50.733936   30687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:21:50.870015   30687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:21:50.993385   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:21:51.007433   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:21:51.026558   30687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:21:51.026622   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.041656   30687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:21:51.041714   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.053502   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.064265   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.074719   30687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:21:51.085392   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.095900   30687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.113421   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.123579   30687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:21:51.132769   30687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 23:21:51.132816   30687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 23:21:51.145489   30687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:21:51.154882   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:21:51.271396   30687 ssh_runner.go:195] Run: sudo systemctl restart crio
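The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, add conmon_cgroup = "pod" plus an unprivileged-port sysctl, and then CRI-O is restarted. The same substitutions expressed as a Go sketch with regexp; the starting file content below is a made-up fragment for illustration:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    `
    	// Pin the pause image, mirroring: sed 's|^.*pause_image = .*$|...|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Switch the cgroup driver to cgroupfs.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Append conmon_cgroup and an unprivileged-port sysctl after the
    	// cgroup_manager line, roughly what the sed '/a' commands above do.
    	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
    		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
    	fmt.Print(conf)
    }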
	I0815 23:21:51.410590   30687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:21:51.410664   30687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:21:51.415423   30687 start.go:563] Will wait 60s for crictl version
	I0815 23:21:51.415488   30687 ssh_runner.go:195] Run: which crictl
	I0815 23:21:51.419517   30687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:21:51.458295   30687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:21:51.458387   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:21:51.487145   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:21:51.517696   30687 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:21:51.519076   30687 out.go:177]   - env NO_PROXY=192.168.39.67
	I0815 23:21:51.520269   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:21:51.522990   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:51.523318   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:51.523340   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:51.523524   30687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:21:51.527692   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:21:51.541082   30687 mustload.go:65] Loading cluster: ha-175414
	I0815 23:21:51.541287   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:51.541578   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:51.541605   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:51.556079   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38217
	I0815 23:21:51.556570   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:51.557063   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:51.557087   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:51.557352   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:51.557505   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:51.559104   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:21:51.559371   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:51.559392   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:51.573419   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39849
	I0815 23:21:51.573768   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:51.574205   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:51.574232   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:51.574569   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:51.574765   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:51.574918   30687 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414 for IP: 192.168.39.19
	I0815 23:21:51.574929   30687 certs.go:194] generating shared ca certs ...
	I0815 23:21:51.574946   30687 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:51.575085   30687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:21:51.575137   30687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:21:51.575151   30687 certs.go:256] generating profile certs ...
	I0815 23:21:51.575233   30687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key
	I0815 23:21:51.575263   30687 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.28369cf2
	I0815 23:21:51.575284   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.28369cf2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.19 192.168.39.254]
	I0815 23:21:51.864708   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.28369cf2 ...
	I0815 23:21:51.864746   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.28369cf2: {Name:mk1af29fefa6fcd050dd679013330c0736cb81cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:51.864941   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.28369cf2 ...
	I0815 23:21:51.864958   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.28369cf2: {Name:mk711f6a080c33c4577e6174099e0ff15fdd0e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:51.865064   30687 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.28369cf2 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt
	I0815 23:21:51.865191   30687 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.28369cf2 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key
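The apiserver certificate generated above is issued for the service IPs (10.96.0.1, 10.0.0.1), localhost, both control-plane node IPs (192.168.39.67, 192.168.39.19) and the kube-vip VIP (192.168.39.254), so clients can reach the API server through any of those addresses. One way to confirm the SANs on the written cert (a sketch, reusing the profile path from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt \
      | grep -A1 'Subject Alternative Name'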
	I0815 23:21:51.865310   30687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key
	I0815 23:21:51.865325   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:21:51.865337   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:21:51.865350   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:21:51.865363   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:21:51.865375   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:21:51.865387   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:21:51.865399   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:21:51.865410   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:21:51.865459   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:21:51.865485   30687 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:21:51.865495   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:21:51.865518   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:21:51.865540   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:21:51.865562   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:21:51.865597   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:21:51.865621   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:21:51.865636   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:51.865648   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:21:51.865677   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:51.868573   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:51.868973   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:51.869003   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:51.869152   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:51.869347   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:51.869497   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:51.869644   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:51.942252   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 23:21:51.947249   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 23:21:51.959431   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 23:21:51.964431   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 23:21:51.975814   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 23:21:51.980044   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 23:21:51.993528   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 23:21:51.998728   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 23:21:52.013690   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 23:21:52.024012   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 23:21:52.037471   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 23:21:52.043042   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 23:21:52.059824   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:21:52.085017   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:21:52.108644   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:21:52.133288   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:21:52.157875   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 23:21:52.183550   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 23:21:52.208624   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:21:52.233447   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:21:52.258959   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:21:52.283917   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:21:52.307721   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:21:52.331804   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 23:21:52.349321   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 23:21:52.366528   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 23:21:52.384149   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 23:21:52.401130   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 23:21:52.417900   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 23:21:52.434872   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 23:21:52.451833   30687 ssh_runner.go:195] Run: openssl version
	I0815 23:21:52.457773   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:21:52.469770   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:52.474461   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:52.474509   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:52.480338   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 23:21:52.491586   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:21:52.503378   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:21:52.507973   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:21:52.508041   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:21:52.513863   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0815 23:21:52.525112   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:21:52.536745   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:21:52.541326   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:21:52.541381   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:21:52.547130   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
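The test -L / ln -fs pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how the system OpenSSL locates trust anchors. The same pattern, reduced to a single certificate (a sketch using the minikubeCA file shown in the log):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"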
	I0815 23:21:52.559218   30687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:21:52.563283   30687 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 23:21:52.563329   30687 kubeadm.go:934] updating node {m02 192.168.39.19 8443 v1.31.0 crio true true} ...
	I0815 23:21:52.563410   30687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-175414-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:21:52.563437   30687 kube-vip.go:115] generating kube-vip config ...
	I0815 23:21:52.563476   30687 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 23:21:52.579719   30687 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 23:21:52.579804   30687 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
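The generated manifest above runs kube-vip as a static pod with host networking and NET_ADMIN/NET_RAW, announcing the VIP 192.168.39.254 via ARP and load-balancing the API server on port 8443; it is written to /etc/kubernetes/manifests/kube-vip.yaml further down. Once kubelet has started it, two quick node-side checks (assumed commands, not captured in this log) are:

    ip addr show eth0 | grep 192.168.39.254      # the current leader should hold the VIP
    sudo crictl ps --name kube-vip               # the static pod's container should be running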
	I0815 23:21:52.579861   30687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:21:52.590185   30687 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 23:21:52.590239   30687 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 23:21:52.600604   30687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 23:21:52.600629   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 23:21:52.600695   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 23:21:52.600754   30687 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0815 23:21:52.600789   30687 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0815 23:21:52.605051   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 23:21:52.605083   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 23:21:53.213833   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 23:21:53.213960   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 23:21:53.219202   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 23:21:53.219244   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 23:21:53.283041   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:21:53.326596   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 23:21:53.326693   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 23:21:53.332854   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 23:21:53.332893   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
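Because /var/lib/minikube/binaries/v1.31.0 does not exist yet on the new node, kubectl, kubeadm and kubelet are fetched from dl.k8s.io with the published .sha256 files as checksums and then copied over SSH. A manual equivalent for one binary, following the URL pattern in the log (a sketch, not the exact download code minikube runs):

    VER=v1.31.0
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check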
	I0815 23:21:53.764470   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 23:21:53.774767   30687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0815 23:21:53.791984   30687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:21:53.809533   30687 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 23:21:53.827678   30687 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 23:21:53.831629   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:21:53.844814   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:21:53.967941   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:21:53.986117   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:21:53.986517   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:53.986556   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:54.001404   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0815 23:21:54.001924   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:54.002379   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:54.002401   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:54.002737   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:54.002924   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:54.003063   30687 start.go:317] joinCluster: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:21:54.003176   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 23:21:54.003191   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:54.006151   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:54.006602   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:54.006629   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:54.006994   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:54.007187   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:54.007315   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:54.007517   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:54.158764   30687 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:21:54.158802   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ndxr0c.wkjp0rvuu46mh8r8 --discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-175414-m02 --control-plane --apiserver-advertise-address=192.168.39.19 --apiserver-bind-port=8443"
	I0815 23:22:15.902670   30687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ndxr0c.wkjp0rvuu46mh8r8 --discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-175414-m02 --control-plane --apiserver-advertise-address=192.168.39.19 --apiserver-bind-port=8443": (21.743838525s)
	I0815 23:22:15.902704   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 23:22:16.457267   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-175414-m02 minikube.k8s.io/updated_at=2024_08_15T23_22_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=ha-175414 minikube.k8s.io/primary=false
	I0815 23:22:16.582820   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-175414-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 23:22:16.711508   30687 start.go:319] duration metric: took 22.708440464s to joinCluster
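After the kubeadm join completes, the new node is labeled with minikube metadata and the control-plane NoSchedule taint is removed (the trailing "-" in the taint command above deletes the taint rather than adding it). Reasonable follow-up checks from the primary (assumed commands, not part of the log):

    kubectl get nodes -o wide
    kubectl -n kube-system get pods -l component=etcd   # expect one etcd member per control-plane node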
	I0815 23:22:16.711580   30687 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:22:16.711873   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:22:16.713906   30687 out.go:177] * Verifying Kubernetes components...
	I0815 23:22:16.715266   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:22:16.914959   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:22:16.930821   30687 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:22:16.931180   30687 kapi.go:59] client config for ha-175414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key", CAFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 23:22:16.931267   30687 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I0815 23:22:16.931575   30687 node_ready.go:35] waiting up to 6m0s for node "ha-175414-m02" to be "Ready" ...
	I0815 23:22:16.931718   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:16.931731   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:16.931741   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:16.931748   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:16.945769   30687 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0815 23:22:17.432660   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:17.432681   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:17.432688   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:17.432693   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:17.438224   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:17.931830   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:17.931856   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:17.931867   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:17.931872   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:17.934928   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:18.431815   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:18.431841   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:18.431850   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:18.431858   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:18.435978   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:18.932756   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:18.932779   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:18.932789   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:18.932794   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:18.944563   30687 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0815 23:22:18.945291   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:19.431861   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:19.431881   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:19.431889   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:19.431893   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:19.434996   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:19.932054   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:19.932081   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:19.932092   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:19.932099   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:19.935557   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:20.432316   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:20.432342   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:20.432354   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:20.432359   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:20.436797   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:20.932722   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:20.932746   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:20.932762   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:20.932766   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:21.020664   30687 round_trippers.go:574] Response Status: 200 OK in 87 milliseconds
	I0815 23:22:21.021343   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:21.431996   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:21.432022   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:21.432032   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:21.432039   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:21.434662   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:21.932425   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:21.932451   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:21.932462   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:21.932467   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:21.939310   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:22:22.432822   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:22.432851   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:22.432863   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:22.432869   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:22.438244   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:22.932102   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:22.932126   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:22.932135   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:22.932140   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:22.935256   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:23.432155   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:23.432176   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:23.432186   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:23.432192   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:23.435979   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:23.436385   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:23.931786   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:23.931808   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:23.931824   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:23.931829   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:23.935115   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:24.431884   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:24.431905   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:24.431912   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:24.431916   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:24.434787   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:24.932306   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:24.932329   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:24.932337   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:24.932342   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:24.935665   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:25.431909   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:25.431927   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:25.431935   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:25.431940   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:25.435398   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:25.931790   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:25.931809   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:25.931817   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:25.931820   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:25.937748   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:25.938566   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:26.431951   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:26.431978   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:26.431989   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:26.431996   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:26.435337   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:26.931971   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:26.931994   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:26.932002   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:26.932006   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:26.935332   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:27.432613   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:27.432637   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:27.432645   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:27.432650   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:27.435773   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:27.931800   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:27.931822   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:27.931830   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:27.931834   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:27.935039   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:28.431819   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:28.431840   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:28.431848   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:28.431851   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:28.434566   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:28.435017   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:28.932436   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:28.932458   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:28.932466   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:28.932471   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:28.936039   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:29.432176   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:29.432201   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:29.432211   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:29.432216   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:29.435339   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:29.932422   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:29.932449   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:29.932460   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:29.932465   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:29.936328   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:30.432389   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:30.432410   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:30.432417   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:30.432420   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:30.435406   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:30.436249   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:30.932709   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:30.932731   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:30.932738   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:30.932742   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:30.936181   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:31.432570   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:31.432590   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:31.432597   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:31.432601   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:31.436728   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:31.932741   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:31.932770   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:31.932780   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:31.932784   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:31.935947   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:32.431901   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:32.431924   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.431932   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.431935   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.435310   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:32.932661   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:32.932680   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.932689   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.932693   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.946799   30687 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0815 23:22:32.947786   30687 node_ready.go:49] node "ha-175414-m02" has status "Ready":"True"
	I0815 23:22:32.947805   30687 node_ready.go:38] duration metric: took 16.016188008s for node "ha-175414-m02" to be "Ready" ...
	I0815 23:22:32.947812   30687 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
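The repeated GETs against /api/v1/nodes/ha-175414-m02 above are minikube's own readiness poll, and the remainder of this section performs the same per-pod polling for the system-critical components. A rough kubectl equivalent of both waits (assumed commands, shown only for orientation):

    kubectl wait --for=condition=Ready node/ha-175414-m02 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m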
	I0815 23:22:32.947881   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:32.947892   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.947901   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.947911   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.967321   30687 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0815 23:22:32.974011   30687 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:32.974102   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vkm5s
	I0815 23:22:32.974112   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.974119   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.974122   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.983281   30687 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0815 23:22:32.983883   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:32.983900   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.983907   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.983912   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.989285   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:32.989816   30687 pod_ready.go:93] pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:32.989837   30687 pod_ready.go:82] duration metric: took 15.801916ms for pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:32.989861   30687 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:32.989934   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zrv4c
	I0815 23:22:32.989951   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.989962   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.989970   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.996009   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:22:32.996645   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:32.996659   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.996667   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.996683   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.007664   30687 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0815 23:22:33.008216   30687 pod_ready.go:93] pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.008234   30687 pod_ready.go:82] duration metric: took 18.36539ms for pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.008245   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.008313   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414
	I0815 23:22:33.008325   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.008335   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.008344   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.013455   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:33.014148   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:33.014183   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.014193   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.014200   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.018177   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:33.019329   30687 pod_ready.go:93] pod "etcd-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.019357   30687 pod_ready.go:82] duration metric: took 11.103182ms for pod "etcd-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.019374   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.019450   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414-m02
	I0815 23:22:33.019457   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.019467   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.019473   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.023241   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:33.023952   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:33.023968   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.023976   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.023980   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.026457   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:33.026972   30687 pod_ready.go:93] pod "etcd-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.027001   30687 pod_ready.go:82] duration metric: took 7.618346ms for pod "etcd-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.027017   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.133348   30687 request.go:632] Waited for 106.269823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414
	I0815 23:22:33.133406   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414
	I0815 23:22:33.133411   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.133418   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.133422   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.137569   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:33.332905   30687 request.go:632] Waited for 194.308147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:33.332973   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:33.332981   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.332993   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.332998   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.336570   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:33.337178   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.337210   30687 pod_ready.go:82] duration metric: took 310.183521ms for pod "kube-apiserver-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.337235   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.533459   30687 request.go:632] Waited for 196.156079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m02
	I0815 23:22:33.533539   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m02
	I0815 23:22:33.533548   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.533561   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.533569   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.537905   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:33.733038   30687 request.go:632] Waited for 194.38316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:33.733106   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:33.733114   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.733122   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.733130   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.736641   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:33.737513   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.737531   30687 pod_ready.go:82] duration metric: took 400.289414ms for pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.737540   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.933661   30687 request.go:632] Waited for 196.059272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414
	I0815 23:22:33.933720   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414
	I0815 23:22:33.933725   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.933731   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.933735   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.937028   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:34.133387   30687 request.go:632] Waited for 195.365027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:34.133433   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:34.133438   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.133445   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.133448   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.136475   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:34.136976   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:34.136995   30687 pod_ready.go:82] duration metric: took 399.448393ms for pod "kube-controller-manager-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.137005   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.333497   30687 request.go:632] Waited for 196.419328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m02
	I0815 23:22:34.333551   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m02
	I0815 23:22:34.333557   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.333564   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.333568   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.337210   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:34.533398   30687 request.go:632] Waited for 195.343135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:34.533449   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:34.533454   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.533461   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.533466   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.537225   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:34.537735   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:34.537753   30687 pod_ready.go:82] duration metric: took 400.740862ms for pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.537762   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4frcn" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.732804   30687 request.go:632] Waited for 194.975905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frcn
	I0815 23:22:34.732879   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frcn
	I0815 23:22:34.732884   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.732892   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.732896   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.737205   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:34.933417   30687 request.go:632] Waited for 195.403534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:34.933478   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:34.933484   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.933491   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.933496   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.938383   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:34.938857   30687 pod_ready.go:93] pod "kube-proxy-4frcn" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:34.938875   30687 pod_ready.go:82] duration metric: took 401.107272ms for pod "kube-proxy-4frcn" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.938884   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dcnmc" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.133103   30687 request.go:632] Waited for 194.151951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcnmc
	I0815 23:22:35.133181   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcnmc
	I0815 23:22:35.133186   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.133194   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.133197   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.137367   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:35.333458   30687 request.go:632] Waited for 195.373306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:35.333528   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:35.333534   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.333541   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.333545   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.337654   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:35.338615   30687 pod_ready.go:93] pod "kube-proxy-dcnmc" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:35.338633   30687 pod_ready.go:82] duration metric: took 399.743347ms for pod "kube-proxy-dcnmc" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.338640   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.532934   30687 request.go:632] Waited for 194.214051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414
	I0815 23:22:35.532997   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414
	I0815 23:22:35.533003   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.533011   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.533014   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.537133   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:35.733165   30687 request.go:632] Waited for 195.403533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:35.733240   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:35.733249   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.733260   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.733273   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.736841   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:35.737423   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:35.737439   30687 pod_ready.go:82] duration metric: took 398.792945ms for pod "kube-scheduler-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.737448   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.933530   30687 request.go:632] Waited for 196.011478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m02
	I0815 23:22:35.933610   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m02
	I0815 23:22:35.933616   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.933623   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.933628   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.936841   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:36.132797   30687 request.go:632] Waited for 195.30673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:36.132870   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:36.132877   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.132887   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.132892   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.136147   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:36.136724   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:36.136748   30687 pod_ready.go:82] duration metric: took 399.292336ms for pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:36.136759   30687 pod_ready.go:39] duration metric: took 3.188935798s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
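The wait loop above polls each system-critical pod (and its node) until the pod reports the Ready condition. As a rough stand-alone sketch, not minikube's own code, the same check can be written against client-go; the kubeconfig path and pod name below are only illustrative:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // what the pod_ready.go wait loop in the log is checking for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: kubeconfig in the default location; pod name copied from the log.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-vkm5s", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", isPodReady(pod))
    }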
	I0815 23:22:36.136774   30687 api_server.go:52] waiting for apiserver process to appear ...
	I0815 23:22:36.136822   30687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:22:36.152417   30687 api_server.go:72] duration metric: took 19.440801659s to wait for apiserver process to appear ...
	I0815 23:22:36.152446   30687 api_server.go:88] waiting for apiserver healthz status ...
	I0815 23:22:36.152469   30687 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0815 23:22:36.157227   30687 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I0815 23:22:36.157300   30687 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I0815 23:22:36.157311   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.157322   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.157327   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.158258   30687 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 23:22:36.158464   30687 api_server.go:141] control plane version: v1.31.0
	I0815 23:22:36.158490   30687 api_server.go:131] duration metric: took 6.036229ms to wait for apiserver health ...
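Once the pods are Ready, the apiserver is probed directly at /healthz and /version. A minimal stand-alone probe could look like the following; it skips TLS verification purely for brevity (minikube itself authenticates with the cluster CA from the kubeconfig), and the endpoint is the one shown in the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify is for illustration only; do not use it outside a test sketch.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.67:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver answers 200 and "ok"
    }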
	I0815 23:22:36.158499   30687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 23:22:36.332764   30687 request.go:632] Waited for 174.201426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:36.332854   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:36.332863   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.332875   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.332886   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.340757   30687 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 23:22:36.345753   30687 system_pods.go:59] 17 kube-system pods found
	I0815 23:22:36.345792   30687 system_pods.go:61] "coredns-6f6b679f8f-vkm5s" [1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c] Running
	I0815 23:22:36.345799   30687 system_pods.go:61] "coredns-6f6b679f8f-zrv4c" [97d399d0-871e-4e59-8c4d-093b5a29a107] Running
	I0815 23:22:36.345805   30687 system_pods.go:61] "etcd-ha-175414" [8358595a-b7fc-40b0-b3a1-8bce46f618dd] Running
	I0815 23:22:36.345812   30687 system_pods.go:61] "etcd-ha-175414-m02" [fd9e81e9-bfd2-4040-9425-06a84b9c3dda] Running
	I0815 23:22:36.345817   30687 system_pods.go:61] "kindnet-47nts" [969ed4f0-c372-4d22-ba84-cfcd5774f1cf] Running
	I0815 23:22:36.345825   30687 system_pods.go:61] "kindnet-jjcdm" [534a226d-c0b6-4a2f-8b2c-27921c9e1aca] Running
	I0815 23:22:36.345833   30687 system_pods.go:61] "kube-apiserver-ha-175414" [74c0c52d-72f6-425e-ba1e-047ebb890ed4] Running
	I0815 23:22:36.345854   30687 system_pods.go:61] "kube-apiserver-ha-175414-m02" [019a6c53-1d80-40a3-93ea-6179c12e17ed] Running
	I0815 23:22:36.345864   30687 system_pods.go:61] "kube-controller-manager-ha-175414" [88aeb420-f593-4e18-8149-6fe48fd85b7d] Running
	I0815 23:22:36.345871   30687 system_pods.go:61] "kube-controller-manager-ha-175414-m02" [be3e762b-556f-4881-9a29-c9a867ccb5e7] Running
	I0815 23:22:36.345878   30687 system_pods.go:61] "kube-proxy-4frcn" [2831334a-a379-4f6d-ada3-53a01fc6f65e] Running
	I0815 23:22:36.345884   30687 system_pods.go:61] "kube-proxy-dcnmc" [572a1e80-23b0-4cb9-bfab-067b6853226d] Running
	I0815 23:22:36.345892   30687 system_pods.go:61] "kube-scheduler-ha-175414" [7463fcbb-2a5f-4101-8b25-f72c74ca515a] Running
	I0815 23:22:36.345898   30687 system_pods.go:61] "kube-scheduler-ha-175414-m02" [1e5715dc-154a-4669-8a4e-986bb989a16b] Running
	I0815 23:22:36.345908   30687 system_pods.go:61] "kube-vip-ha-175414" [6b98571e-8ad5-45e0-acbc-d0e875647a69] Running
	I0815 23:22:36.345914   30687 system_pods.go:61] "kube-vip-ha-175414-m02" [4877d97c-4adb-4ce8-813f-0819e8a96b5a] Running
	I0815 23:22:36.345920   30687 system_pods.go:61] "storage-provisioner" [7042d764-6043-449c-a1e9-aaa28256c579] Running
	I0815 23:22:36.345928   30687 system_pods.go:74] duration metric: took 187.421636ms to wait for pod list to return data ...
	I0815 23:22:36.345940   30687 default_sa.go:34] waiting for default service account to be created ...
	I0815 23:22:36.533732   30687 request.go:632] Waited for 187.721428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I0815 23:22:36.533801   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I0815 23:22:36.533813   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.533824   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.533831   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.537689   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:36.537926   30687 default_sa.go:45] found service account: "default"
	I0815 23:22:36.537946   30687 default_sa.go:55] duration metric: took 191.997547ms for default service account to be created ...
	I0815 23:22:36.537953   30687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 23:22:36.732805   30687 request.go:632] Waited for 194.768976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:36.732891   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:36.732902   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.732914   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.732924   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.737657   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:36.742379   30687 system_pods.go:86] 17 kube-system pods found
	I0815 23:22:36.742407   30687 system_pods.go:89] "coredns-6f6b679f8f-vkm5s" [1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c] Running
	I0815 23:22:36.742414   30687 system_pods.go:89] "coredns-6f6b679f8f-zrv4c" [97d399d0-871e-4e59-8c4d-093b5a29a107] Running
	I0815 23:22:36.742420   30687 system_pods.go:89] "etcd-ha-175414" [8358595a-b7fc-40b0-b3a1-8bce46f618dd] Running
	I0815 23:22:36.742425   30687 system_pods.go:89] "etcd-ha-175414-m02" [fd9e81e9-bfd2-4040-9425-06a84b9c3dda] Running
	I0815 23:22:36.742429   30687 system_pods.go:89] "kindnet-47nts" [969ed4f0-c372-4d22-ba84-cfcd5774f1cf] Running
	I0815 23:22:36.742435   30687 system_pods.go:89] "kindnet-jjcdm" [534a226d-c0b6-4a2f-8b2c-27921c9e1aca] Running
	I0815 23:22:36.742441   30687 system_pods.go:89] "kube-apiserver-ha-175414" [74c0c52d-72f6-425e-ba1e-047ebb890ed4] Running
	I0815 23:22:36.742446   30687 system_pods.go:89] "kube-apiserver-ha-175414-m02" [019a6c53-1d80-40a3-93ea-6179c12e17ed] Running
	I0815 23:22:36.742452   30687 system_pods.go:89] "kube-controller-manager-ha-175414" [88aeb420-f593-4e18-8149-6fe48fd85b7d] Running
	I0815 23:22:36.742461   30687 system_pods.go:89] "kube-controller-manager-ha-175414-m02" [be3e762b-556f-4881-9a29-c9a867ccb5e7] Running
	I0815 23:22:36.742469   30687 system_pods.go:89] "kube-proxy-4frcn" [2831334a-a379-4f6d-ada3-53a01fc6f65e] Running
	I0815 23:22:36.742476   30687 system_pods.go:89] "kube-proxy-dcnmc" [572a1e80-23b0-4cb9-bfab-067b6853226d] Running
	I0815 23:22:36.742485   30687 system_pods.go:89] "kube-scheduler-ha-175414" [7463fcbb-2a5f-4101-8b25-f72c74ca515a] Running
	I0815 23:22:36.742494   30687 system_pods.go:89] "kube-scheduler-ha-175414-m02" [1e5715dc-154a-4669-8a4e-986bb989a16b] Running
	I0815 23:22:36.742502   30687 system_pods.go:89] "kube-vip-ha-175414" [6b98571e-8ad5-45e0-acbc-d0e875647a69] Running
	I0815 23:22:36.742507   30687 system_pods.go:89] "kube-vip-ha-175414-m02" [4877d97c-4adb-4ce8-813f-0819e8a96b5a] Running
	I0815 23:22:36.742512   30687 system_pods.go:89] "storage-provisioner" [7042d764-6043-449c-a1e9-aaa28256c579] Running
	I0815 23:22:36.742521   30687 system_pods.go:126] duration metric: took 204.56185ms to wait for k8s-apps to be running ...
	I0815 23:22:36.742534   30687 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 23:22:36.742585   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:22:36.757271   30687 system_svc.go:56] duration metric: took 14.728453ms WaitForService to wait for kubelet
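The kubelet check is just a question to systemd over SSH: exit status 0 from "systemctl is-active" means the unit is running. A hedged sketch of the same probe from the host, with an assumed node IP and key path (not taken verbatim from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Assumption: illustrative IP and identity file; adjust to the machine under test.
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-i", "/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa",
            "docker@192.168.39.67",
            "sudo systemctl is-active --quiet kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }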
	I0815 23:22:36.757305   30687 kubeadm.go:582] duration metric: took 20.045692436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:22:36.757327   30687 node_conditions.go:102] verifying NodePressure condition ...
	I0815 23:22:36.932664   30687 request.go:632] Waited for 175.26732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I0815 23:22:36.932737   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I0815 23:22:36.932748   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.932757   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.932761   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.936589   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:36.937308   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:22:36.937326   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:22:36.937343   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:22:36.937347   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:22:36.937351   30687 node_conditions.go:105] duration metric: took 180.019245ms to run NodePressure ...
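The NodePressure step reads each node's reported capacity (here 17734596Ki of ephemeral storage and 2 CPUs per node). A short client-go sketch that lists the same fields, again with an assumed default kubeconfig:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, node := range nodes.Items {
            cpu := node.Status.Capacity[corev1.ResourceCPU]
            eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), eph.String())
        }
    }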
	I0815 23:22:36.937361   30687 start.go:241] waiting for startup goroutines ...
	I0815 23:22:36.937383   30687 start.go:255] writing updated cluster config ...
	I0815 23:22:36.939442   30687 out.go:201] 
	I0815 23:22:36.941076   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:22:36.941201   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:22:36.942865   30687 out.go:177] * Starting "ha-175414-m03" control-plane node in "ha-175414" cluster
	I0815 23:22:36.943930   30687 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:22:36.943954   30687 cache.go:56] Caching tarball of preloaded images
	I0815 23:22:36.944051   30687 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:22:36.944060   30687 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:22:36.944141   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:22:36.944300   30687 start.go:360] acquireMachinesLock for ha-175414-m03: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:22:36.944341   30687 start.go:364] duration metric: took 23.052µs to acquireMachinesLock for "ha-175414-m03"
	I0815 23:22:36.944363   30687 start.go:93] Provisioning new machine with config: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:22:36.944456   30687 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0815 23:22:36.945756   30687 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 23:22:36.945839   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:22:36.945883   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:22:36.960464   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0815 23:22:36.960920   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:22:36.961411   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:22:36.961433   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:22:36.961899   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:22:36.962094   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetMachineName
	I0815 23:22:36.962257   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:22:36.962422   30687 start.go:159] libmachine.API.Create for "ha-175414" (driver="kvm2")
	I0815 23:22:36.962449   30687 client.go:168] LocalClient.Create starting
	I0815 23:22:36.962484   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0815 23:22:36.962527   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:22:36.962545   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:22:36.962607   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0815 23:22:36.962633   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:22:36.962649   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:22:36.962675   30687 main.go:141] libmachine: Running pre-create checks...
	I0815 23:22:36.962686   30687 main.go:141] libmachine: (ha-175414-m03) Calling .PreCreateCheck
	I0815 23:22:36.962859   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetConfigRaw
	I0815 23:22:36.963190   30687 main.go:141] libmachine: Creating machine...
	I0815 23:22:36.963202   30687 main.go:141] libmachine: (ha-175414-m03) Calling .Create
	I0815 23:22:36.963324   30687 main.go:141] libmachine: (ha-175414-m03) Creating KVM machine...
	I0815 23:22:36.964577   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found existing default KVM network
	I0815 23:22:36.964715   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found existing private KVM network mk-ha-175414
	I0815 23:22:36.964846   30687 main.go:141] libmachine: (ha-175414-m03) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03 ...
	I0815 23:22:36.964867   30687 main.go:141] libmachine: (ha-175414-m03) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 23:22:36.964919   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:36.964843   31431 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:22:36.965050   30687 main.go:141] libmachine: (ha-175414-m03) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 23:22:37.192864   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:37.192752   31431 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa...
	I0815 23:22:37.272364   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:37.272256   31431 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/ha-175414-m03.rawdisk...
	I0815 23:22:37.272395   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Writing magic tar header
	I0815 23:22:37.272406   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Writing SSH key tar header
	I0815 23:22:37.272418   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:37.272367   31431 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03 ...
	I0815 23:22:37.272452   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03
	I0815 23:22:37.272466   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03 (perms=drwx------)
	I0815 23:22:37.272545   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0815 23:22:37.272569   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0815 23:22:37.272579   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:22:37.272594   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0815 23:22:37.272606   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 23:22:37.272617   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins
	I0815 23:22:37.272625   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home
	I0815 23:22:37.272638   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Skipping /home - not owner
	I0815 23:22:37.272674   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0815 23:22:37.272697   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0815 23:22:37.272720   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 23:22:37.272734   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 23:22:37.272747   30687 main.go:141] libmachine: (ha-175414-m03) Creating domain...
	I0815 23:22:37.273666   30687 main.go:141] libmachine: (ha-175414-m03) define libvirt domain using xml: 
	I0815 23:22:37.273681   30687 main.go:141] libmachine: (ha-175414-m03) <domain type='kvm'>
	I0815 23:22:37.273689   30687 main.go:141] libmachine: (ha-175414-m03)   <name>ha-175414-m03</name>
	I0815 23:22:37.273694   30687 main.go:141] libmachine: (ha-175414-m03)   <memory unit='MiB'>2200</memory>
	I0815 23:22:37.273700   30687 main.go:141] libmachine: (ha-175414-m03)   <vcpu>2</vcpu>
	I0815 23:22:37.273705   30687 main.go:141] libmachine: (ha-175414-m03)   <features>
	I0815 23:22:37.273713   30687 main.go:141] libmachine: (ha-175414-m03)     <acpi/>
	I0815 23:22:37.273724   30687 main.go:141] libmachine: (ha-175414-m03)     <apic/>
	I0815 23:22:37.273732   30687 main.go:141] libmachine: (ha-175414-m03)     <pae/>
	I0815 23:22:37.273741   30687 main.go:141] libmachine: (ha-175414-m03)     
	I0815 23:22:37.273782   30687 main.go:141] libmachine: (ha-175414-m03)   </features>
	I0815 23:22:37.273808   30687 main.go:141] libmachine: (ha-175414-m03)   <cpu mode='host-passthrough'>
	I0815 23:22:37.273818   30687 main.go:141] libmachine: (ha-175414-m03)   
	I0815 23:22:37.273832   30687 main.go:141] libmachine: (ha-175414-m03)   </cpu>
	I0815 23:22:37.273856   30687 main.go:141] libmachine: (ha-175414-m03)   <os>
	I0815 23:22:37.273868   30687 main.go:141] libmachine: (ha-175414-m03)     <type>hvm</type>
	I0815 23:22:37.273880   30687 main.go:141] libmachine: (ha-175414-m03)     <boot dev='cdrom'/>
	I0815 23:22:37.273886   30687 main.go:141] libmachine: (ha-175414-m03)     <boot dev='hd'/>
	I0815 23:22:37.273901   30687 main.go:141] libmachine: (ha-175414-m03)     <bootmenu enable='no'/>
	I0815 23:22:37.273910   30687 main.go:141] libmachine: (ha-175414-m03)   </os>
	I0815 23:22:37.273923   30687 main.go:141] libmachine: (ha-175414-m03)   <devices>
	I0815 23:22:37.273934   30687 main.go:141] libmachine: (ha-175414-m03)     <disk type='file' device='cdrom'>
	I0815 23:22:37.273955   30687 main.go:141] libmachine: (ha-175414-m03)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/boot2docker.iso'/>
	I0815 23:22:37.273978   30687 main.go:141] libmachine: (ha-175414-m03)       <target dev='hdc' bus='scsi'/>
	I0815 23:22:37.273995   30687 main.go:141] libmachine: (ha-175414-m03)       <readonly/>
	I0815 23:22:37.274004   30687 main.go:141] libmachine: (ha-175414-m03)     </disk>
	I0815 23:22:37.274015   30687 main.go:141] libmachine: (ha-175414-m03)     <disk type='file' device='disk'>
	I0815 23:22:37.274035   30687 main.go:141] libmachine: (ha-175414-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 23:22:37.274050   30687 main.go:141] libmachine: (ha-175414-m03)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/ha-175414-m03.rawdisk'/>
	I0815 23:22:37.274063   30687 main.go:141] libmachine: (ha-175414-m03)       <target dev='hda' bus='virtio'/>
	I0815 23:22:37.274073   30687 main.go:141] libmachine: (ha-175414-m03)     </disk>
	I0815 23:22:37.274082   30687 main.go:141] libmachine: (ha-175414-m03)     <interface type='network'>
	I0815 23:22:37.274095   30687 main.go:141] libmachine: (ha-175414-m03)       <source network='mk-ha-175414'/>
	I0815 23:22:37.274118   30687 main.go:141] libmachine: (ha-175414-m03)       <model type='virtio'/>
	I0815 23:22:37.274137   30687 main.go:141] libmachine: (ha-175414-m03)     </interface>
	I0815 23:22:37.274151   30687 main.go:141] libmachine: (ha-175414-m03)     <interface type='network'>
	I0815 23:22:37.274164   30687 main.go:141] libmachine: (ha-175414-m03)       <source network='default'/>
	I0815 23:22:37.274177   30687 main.go:141] libmachine: (ha-175414-m03)       <model type='virtio'/>
	I0815 23:22:37.274187   30687 main.go:141] libmachine: (ha-175414-m03)     </interface>
	I0815 23:22:37.274199   30687 main.go:141] libmachine: (ha-175414-m03)     <serial type='pty'>
	I0815 23:22:37.274214   30687 main.go:141] libmachine: (ha-175414-m03)       <target port='0'/>
	I0815 23:22:37.274226   30687 main.go:141] libmachine: (ha-175414-m03)     </serial>
	I0815 23:22:37.274237   30687 main.go:141] libmachine: (ha-175414-m03)     <console type='pty'>
	I0815 23:22:37.274251   30687 main.go:141] libmachine: (ha-175414-m03)       <target type='serial' port='0'/>
	I0815 23:22:37.274262   30687 main.go:141] libmachine: (ha-175414-m03)     </console>
	I0815 23:22:37.274275   30687 main.go:141] libmachine: (ha-175414-m03)     <rng model='virtio'>
	I0815 23:22:37.274292   30687 main.go:141] libmachine: (ha-175414-m03)       <backend model='random'>/dev/random</backend>
	I0815 23:22:37.274304   30687 main.go:141] libmachine: (ha-175414-m03)     </rng>
	I0815 23:22:37.274313   30687 main.go:141] libmachine: (ha-175414-m03)     
	I0815 23:22:37.274322   30687 main.go:141] libmachine: (ha-175414-m03)     
	I0815 23:22:37.274333   30687 main.go:141] libmachine: (ha-175414-m03)   </devices>
	I0815 23:22:37.274345   30687 main.go:141] libmachine: (ha-175414-m03) </domain>
	I0815 23:22:37.274353   30687 main.go:141] libmachine: (ha-175414-m03) 
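The block above is the complete libvirt domain definition the kvm2 driver generates for the new m03 node. Defining and booting a domain from such XML with the libvirt Go bindings looks roughly like the sketch below (assumptions: module libvirt.org/go/libvirt, which needs cgo and the libvirt C library, and the XML saved to a local file; this is not the driver's actual code path):

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Assumption: the domain XML from the log written to this file.
        xml, err := os.ReadFile("ha-175414-m03.xml")
        if err != nil {
            log.Fatal(err)
        }
        // Connection URI matches the KVMQemuURI in the cluster config above.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent domain from the XML, then boot it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain ha-175414-m03 defined and started")
    }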
	I0815 23:22:37.280800   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:73:cb:49 in network default
	I0815 23:22:37.281372   30687 main.go:141] libmachine: (ha-175414-m03) Ensuring networks are active...
	I0815 23:22:37.281388   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:37.282151   30687 main.go:141] libmachine: (ha-175414-m03) Ensuring network default is active
	I0815 23:22:37.282496   30687 main.go:141] libmachine: (ha-175414-m03) Ensuring network mk-ha-175414 is active
	I0815 23:22:37.282842   30687 main.go:141] libmachine: (ha-175414-m03) Getting domain xml...
	I0815 23:22:37.283465   30687 main.go:141] libmachine: (ha-175414-m03) Creating domain...
	I0815 23:22:38.526952   30687 main.go:141] libmachine: (ha-175414-m03) Waiting to get IP...
	I0815 23:22:38.527758   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:38.528177   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:38.528204   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:38.528142   31431 retry.go:31] will retry after 239.145725ms: waiting for machine to come up
	I0815 23:22:38.768565   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:38.768982   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:38.769008   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:38.768952   31431 retry.go:31] will retry after 385.356446ms: waiting for machine to come up
	I0815 23:22:39.155461   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:39.155832   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:39.155864   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:39.155789   31431 retry.go:31] will retry after 312.62161ms: waiting for machine to come up
	I0815 23:22:39.470250   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:39.470675   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:39.470697   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:39.470643   31431 retry.go:31] will retry after 444.229589ms: waiting for machine to come up
	I0815 23:22:39.916243   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:39.916587   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:39.916613   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:39.916557   31431 retry.go:31] will retry after 620.629364ms: waiting for machine to come up
	I0815 23:22:40.539215   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:40.539587   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:40.539610   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:40.539557   31431 retry.go:31] will retry after 797.102726ms: waiting for machine to come up
	I0815 23:22:41.338452   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:41.338872   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:41.338903   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:41.338819   31431 retry.go:31] will retry after 759.026392ms: waiting for machine to come up
	I0815 23:22:42.099393   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:42.099813   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:42.099868   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:42.099797   31431 retry.go:31] will retry after 1.405444372s: waiting for machine to come up
	I0815 23:22:43.506843   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:43.507282   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:43.507304   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:43.507235   31431 retry.go:31] will retry after 1.309943276s: waiting for machine to come up
	I0815 23:22:44.818216   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:44.818664   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:44.818687   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:44.818630   31431 retry.go:31] will retry after 1.907729069s: waiting for machine to come up
	I0815 23:22:46.728655   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:46.729071   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:46.729096   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:46.729019   31431 retry.go:31] will retry after 1.767034123s: waiting for machine to come up
	I0815 23:22:48.497136   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:48.497534   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:48.497563   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:48.497498   31431 retry.go:31] will retry after 2.658746356s: waiting for machine to come up
	I0815 23:22:51.158963   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:51.159423   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:51.159449   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:51.159378   31431 retry.go:31] will retry after 4.113519624s: waiting for machine to come up
	I0815 23:22:55.274770   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:55.275134   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:55.275156   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:55.275094   31431 retry.go:31] will retry after 3.634365209s: waiting for machine to come up
	I0815 23:22:58.910902   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:58.911318   30687 main.go:141] libmachine: (ha-175414-m03) Found IP for machine: 192.168.39.100
	I0815 23:22:58.911351   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has current primary IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
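Between defining the domain and the DHCP lease appearing, the driver polls with growing, jittered delays (the "will retry after ..." lines above). A generic version of that backoff loop, independent of libvirt and only meant to illustrate the pattern:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping with an
    // exponentially growing, jittered delay between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base * (1 << i)
            delay += time.Duration(rand.Int63n(int64(delay) / 2)) // add jitter
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        // Stand-in for "look up the DHCP lease for the new MAC address".
        calls := 0
        err := retry(10, 250*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return errors.New("unable to find current IP address")
            }
            return nil
        })
        fmt.Println("done:", err)
    }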
	I0815 23:22:58.911360   30687 main.go:141] libmachine: (ha-175414-m03) Reserving static IP address...
	I0815 23:22:58.911771   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find host DHCP lease matching {name: "ha-175414-m03", mac: "52:54:00:bc:81:69", ip: "192.168.39.100"} in network mk-ha-175414
	I0815 23:22:58.984399   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Getting to WaitForSSH function...
	I0815 23:22:58.984424   30687 main.go:141] libmachine: (ha-175414-m03) Reserved static IP address: 192.168.39.100
	I0815 23:22:58.984434   30687 main.go:141] libmachine: (ha-175414-m03) Waiting for SSH to be available...
	I0815 23:22:58.987083   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:58.987453   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414
	I0815 23:22:58.987483   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find defined IP address of network mk-ha-175414 interface with MAC address 52:54:00:bc:81:69
	I0815 23:22:58.987587   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using SSH client type: external
	I0815 23:22:58.987615   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa (-rw-------)
	I0815 23:22:58.987647   30687 main.go:141] libmachine: (ha-175414-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:22:58.987665   30687 main.go:141] libmachine: (ha-175414-m03) DBG | About to run SSH command:
	I0815 23:22:58.987680   30687 main.go:141] libmachine: (ha-175414-m03) DBG | exit 0
	I0815 23:22:58.991442   30687 main.go:141] libmachine: (ha-175414-m03) DBG | SSH cmd err, output: exit status 255: 
	I0815 23:22:58.991470   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0815 23:22:58.991482   30687 main.go:141] libmachine: (ha-175414-m03) DBG | command : exit 0
	I0815 23:22:58.991489   30687 main.go:141] libmachine: (ha-175414-m03) DBG | err     : exit status 255
	I0815 23:22:58.991498   30687 main.go:141] libmachine: (ha-175414-m03) DBG | output  : 
	I0815 23:23:01.992175   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Getting to WaitForSSH function...
	I0815 23:23:01.994624   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:01.994987   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:01.995016   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:01.995182   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using SSH client type: external
	I0815 23:23:01.995210   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa (-rw-------)
	I0815 23:23:01.995242   30687 main.go:141] libmachine: (ha-175414-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:23:01.995259   30687 main.go:141] libmachine: (ha-175414-m03) DBG | About to run SSH command:
	I0815 23:23:01.995272   30687 main.go:141] libmachine: (ha-175414-m03) DBG | exit 0
	I0815 23:23:02.118140   30687 main.go:141] libmachine: (ha-175414-m03) DBG | SSH cmd err, output: <nil>: 
	I0815 23:23:02.118385   30687 main.go:141] libmachine: (ha-175414-m03) KVM machine creation complete!
	I0815 23:23:02.118681   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetConfigRaw
	I0815 23:23:02.119178   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:02.119358   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:02.119520   30687 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 23:23:02.119535   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:23:02.120637   30687 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 23:23:02.120649   30687 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 23:23:02.120654   30687 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 23:23:02.120660   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.123775   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.124135   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.124168   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.124321   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.124494   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.124674   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.124825   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.125013   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.125217   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.125230   30687 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 23:23:02.225333   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:23:02.225359   30687 main.go:141] libmachine: Detecting the provisioner...
	I0815 23:23:02.225367   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.228065   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.228446   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.228478   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.228618   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.228825   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.228946   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.229104   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.229228   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.229406   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.229418   30687 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 23:23:02.330731   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 23:23:02.330812   30687 main.go:141] libmachine: found compatible host: buildroot
	I0815 23:23:02.330822   30687 main.go:141] libmachine: Provisioning with buildroot...
	I0815 23:23:02.330833   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetMachineName
	I0815 23:23:02.331140   30687 buildroot.go:166] provisioning hostname "ha-175414-m03"
	I0815 23:23:02.331169   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetMachineName
	I0815 23:23:02.331351   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.334241   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.334719   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.334749   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.334925   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.335106   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.335247   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.335344   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.335520   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.335714   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.335728   30687 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-175414-m03 && echo "ha-175414-m03" | sudo tee /etc/hostname
	I0815 23:23:02.449420   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414-m03
	
	I0815 23:23:02.449447   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.452111   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.452479   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.452510   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.452680   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.452890   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.453043   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.453167   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.453345   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.453513   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.453529   30687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-175414-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-175414-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-175414-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:23:02.563978   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:23:02.564017   30687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:23:02.564041   30687 buildroot.go:174] setting up certificates
	I0815 23:23:02.564052   30687 provision.go:84] configureAuth start
	I0815 23:23:02.564067   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetMachineName
	I0815 23:23:02.564315   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:23:02.567178   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.567502   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.567531   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.567653   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.569617   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.569965   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.569985   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.570137   30687 provision.go:143] copyHostCerts
	I0815 23:23:02.570168   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:23:02.570207   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:23:02.570219   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:23:02.570308   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:23:02.570401   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:23:02.570425   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:23:02.570435   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:23:02.570472   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:23:02.570559   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:23:02.570582   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:23:02.570592   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:23:02.570626   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:23:02.570693   30687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.ha-175414-m03 san=[127.0.0.1 192.168.39.100 ha-175414-m03 localhost minikube]
	I0815 23:23:02.675214   30687 provision.go:177] copyRemoteCerts
	I0815 23:23:02.675265   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:23:02.675287   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.677993   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.678328   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.678359   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.678505   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.678710   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.678893   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.679033   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:23:02.760755   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:23:02.760833   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:23:02.786303   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:23:02.786368   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 23:23:02.811650   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:23:02.811736   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 23:23:02.836691   30687 provision.go:87] duration metric: took 272.627832ms to configureAuth
	I0815 23:23:02.836722   30687 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:23:02.836967   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:23:02.837035   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.839632   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.840085   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.840123   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.840303   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.840494   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.840651   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.840786   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.840978   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.841157   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.841178   30687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:23:03.107147   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:23:03.107176   30687 main.go:141] libmachine: Checking connection to Docker...
	I0815 23:23:03.107185   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetURL
	I0815 23:23:03.108442   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using libvirt version 6000000
	I0815 23:23:03.110717   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.111067   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.111088   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.111217   30687 main.go:141] libmachine: Docker is up and running!
	I0815 23:23:03.111234   30687 main.go:141] libmachine: Reticulating splines...
	I0815 23:23:03.111240   30687 client.go:171] duration metric: took 26.148784091s to LocalClient.Create
	I0815 23:23:03.111265   30687 start.go:167] duration metric: took 26.148842714s to libmachine.API.Create "ha-175414"
	I0815 23:23:03.111276   30687 start.go:293] postStartSetup for "ha-175414-m03" (driver="kvm2")
	I0815 23:23:03.111287   30687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:23:03.111303   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.111538   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:23:03.111566   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:03.113827   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.114157   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.114184   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.114308   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:03.114472   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.114581   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:03.114712   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:23:03.197278   30687 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:23:03.201711   30687 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:23:03.201738   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:23:03.201809   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:23:03.201916   30687 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:23:03.201928   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:23:03.202029   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:23:03.212940   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:23:03.237483   30687 start.go:296] duration metric: took 126.192315ms for postStartSetup
	I0815 23:23:03.237538   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetConfigRaw
	I0815 23:23:03.238123   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:23:03.240597   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.240969   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.241001   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.241259   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:23:03.241452   30687 start.go:128] duration metric: took 26.296987189s to createHost
	I0815 23:23:03.241473   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:03.243730   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.244074   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.244102   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.244303   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:03.244467   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.244578   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.244706   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:03.244839   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:03.244992   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:03.245003   30687 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:23:03.346707   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764183.323470372
	
	I0815 23:23:03.346734   30687 fix.go:216] guest clock: 1723764183.323470372
	I0815 23:23:03.346745   30687 fix.go:229] Guest: 2024-08-15 23:23:03.323470372 +0000 UTC Remote: 2024-08-15 23:23:03.241463342 +0000 UTC m=+144.142728965 (delta=82.00703ms)
	I0815 23:23:03.346766   30687 fix.go:200] guest clock delta is within tolerance: 82.00703ms
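	(For reference, the delta reported by fix.go above is just guest minus host: 1723764183.323470372 - 1723764183.241463342 = 0.08200703 s, i.e. the 82.00703ms shown.)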
	I0815 23:23:03.346778   30687 start.go:83] releasing machines lock for "ha-175414-m03", held for 26.402424779s
	I0815 23:23:03.346804   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.347066   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:23:03.349497   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.349866   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.349894   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.352155   30687 out.go:177] * Found network options:
	I0815 23:23:03.353454   30687 out.go:177]   - NO_PROXY=192.168.39.67,192.168.39.19
	W0815 23:23:03.354600   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 23:23:03.354620   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 23:23:03.354633   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.355155   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.355330   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.355443   30687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:23:03.355480   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	W0815 23:23:03.355564   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 23:23:03.355587   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 23:23:03.355650   30687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:23:03.355667   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:03.358223   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.358485   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.358612   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.358633   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.358803   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:03.358943   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.358970   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.358988   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.359165   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:03.359183   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:03.359409   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.359409   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:23:03.359567   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:03.359722   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:23:03.600165   30687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:23:03.606470   30687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:23:03.606543   30687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:23:03.624369   30687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 23:23:03.624398   30687 start.go:495] detecting cgroup driver to use...
	I0815 23:23:03.624467   30687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:23:03.641972   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:23:03.657096   30687 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:23:03.657151   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:23:03.672682   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:23:03.687557   30687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:23:03.817290   30687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:23:03.966704   30687 docker.go:233] disabling docker service ...
	I0815 23:23:03.966784   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:23:03.982293   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:23:03.996779   30687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:23:04.139971   30687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:23:04.275280   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:23:04.290218   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:23:04.309906   30687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:23:04.309964   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.320966   30687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:23:04.321031   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.332813   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.344559   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.355880   30687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:23:04.367959   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.379727   30687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.397354   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.408561   30687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:23:04.419480   30687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 23:23:04.419547   30687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 23:23:04.435676   30687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:23:04.446099   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:23:04.585087   30687 ssh_runner.go:195] Run: sudo systemctl restart crio
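	The run of ssh_runner commands above (writing /etc/crictl.yaml, editing /etc/crio/crio.conf.d/02-crio.conf, loading br_netfilter, enabling ip_forward, then restarting crio) is what turns the freshly booted guest into a usable CRI-O node. As a hedged, editorial consolidation of exactly those logged commands (same files and values; not taken from the minikube source), the whole step amounts to a script like:

		#!/usr/bin/env bash
		set -euo pipefail
		conf=/etc/crio/crio.conf.d/02-crio.conf

		# Point crictl at the CRI-O socket.
		printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

		# Pause image and cgroup driver, matching the values logged above.
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
		sudo sed -i '/conmon_cgroup = .*/d' "$conf"
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

		# Allow unprivileged ports in pods and drop any stale minikube CNI config.
		sudo rm -rf /etc/cni/net.mk
		sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$conf"
		sudo grep -q '^ *default_sysctls' "$conf" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
		sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

		# Kernel prerequisites, then restart the runtime.
		sudo modprobe br_netfilter
		echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
		sudo systemctl daemon-reload
		sudo systemctl restart crio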
	I0815 23:23:04.744693   30687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:23:04.744756   30687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:23:04.750939   30687 start.go:563] Will wait 60s for crictl version
	I0815 23:23:04.750998   30687 ssh_runner.go:195] Run: which crictl
	I0815 23:23:04.755210   30687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:23:04.794168   30687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:23:04.794259   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:23:04.823208   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:23:04.853836   30687 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:23:04.855391   30687 out.go:177]   - env NO_PROXY=192.168.39.67
	I0815 23:23:04.856666   30687 out.go:177]   - env NO_PROXY=192.168.39.67,192.168.39.19
	I0815 23:23:04.857885   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:23:04.860408   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:04.860732   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:04.860757   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:04.860934   30687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:23:04.869156   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:23:04.887280   30687 mustload.go:65] Loading cluster: ha-175414
	I0815 23:23:04.887507   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:23:04.887822   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:23:04.887862   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:23:04.903163   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0815 23:23:04.903540   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:23:04.903965   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:23:04.903986   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:23:04.904299   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:23:04.904480   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:23:04.905944   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:23:04.906210   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:23:04.906242   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:23:04.920592   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I0815 23:23:04.921008   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:23:04.921478   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:23:04.921500   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:23:04.921791   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:23:04.921982   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:23:04.922134   30687 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414 for IP: 192.168.39.100
	I0815 23:23:04.922146   30687 certs.go:194] generating shared ca certs ...
	I0815 23:23:04.922162   30687 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:23:04.922336   30687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:23:04.922385   30687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:23:04.922398   30687 certs.go:256] generating profile certs ...
	I0815 23:23:04.922492   30687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key
	I0815 23:23:04.922524   30687 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.88ea30ef
	I0815 23:23:04.922544   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.88ea30ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.19 192.168.39.100 192.168.39.254]
	I0815 23:23:05.013221   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.88ea30ef ...
	I0815 23:23:05.013250   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.88ea30ef: {Name:mke9ca6dedb4237b644aef94ccf2d01f0d66f5fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:23:05.013458   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.88ea30ef ...
	I0815 23:23:05.013474   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.88ea30ef: {Name:mkf1272ec8ffcdb7dd347b9fd6444ff28e322e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:23:05.013572   30687 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.88ea30ef -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt
	I0815 23:23:05.013715   30687 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.88ea30ef -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key
	I0815 23:23:05.013910   30687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key
	I0815 23:23:05.013930   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:23:05.013947   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:23:05.013966   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:23:05.013984   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:23:05.014001   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:23:05.014018   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:23:05.014033   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:23:05.014051   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:23:05.014107   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:23:05.014143   30687 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:23:05.014156   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:23:05.014191   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:23:05.014222   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:23:05.014251   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:23:05.014305   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:23:05.014340   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:23:05.014360   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:23:05.014378   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:23:05.014417   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:23:05.017367   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:23:05.017796   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:23:05.017826   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:23:05.018019   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:23:05.018193   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:23:05.018336   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:23:05.018484   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:23:05.094168   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 23:23:05.099428   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 23:23:05.111938   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 23:23:05.116389   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 23:23:05.127334   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 23:23:05.131814   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 23:23:05.143360   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 23:23:05.148404   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 23:23:05.159613   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 23:23:05.165596   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 23:23:05.182986   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 23:23:05.187302   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 23:23:05.198315   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:23:05.224068   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:23:05.250525   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:23:05.278089   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:23:05.306288   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0815 23:23:05.333737   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 23:23:05.358555   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:23:05.385743   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:23:05.411860   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:23:05.438272   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:23:05.462064   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:23:05.487122   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 23:23:05.503694   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 23:23:05.521140   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 23:23:05.538295   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 23:23:05.557736   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 23:23:05.576230   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 23:23:05.594248   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 23:23:05.611911   30687 ssh_runner.go:195] Run: openssl version
	I0815 23:23:05.617774   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:23:05.628998   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:23:05.633451   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:23:05.633510   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:23:05.639819   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0815 23:23:05.650555   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:23:05.661350   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:23:05.666039   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:23:05.666096   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:23:05.671902   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 23:23:05.682898   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:23:05.693953   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:23:05.698591   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:23:05.698637   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:23:05.704367   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 23:23:05.715644   30687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:23:05.719985   30687 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 23:23:05.720047   30687 kubeadm.go:934] updating node {m03 192.168.39.100 8443 v1.31.0 crio true true} ...
	I0815 23:23:05.720143   30687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-175414-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:23:05.720177   30687 kube-vip.go:115] generating kube-vip config ...
	I0815 23:23:05.720220   30687 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 23:23:05.738353   30687 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 23:23:05.738410   30687 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
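	The generated manifest above is copied later in this run to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet starts kube-vip as a static pod on each control-plane node and the instances use leader election (vip_leaderelection / plndr-cp-lock) to decide which node answers on the 192.168.39.254 VIP over eth0. A minimal, hedged way to spot-check this once ha-175414-m03 has joined (editorial example, not part of the test flow) is to run, on the node itself:

		# which control-plane node currently holds the API VIP on eth0?
		ip -4 addr show dev eth0 | grep -F 192.168.39.254

		# is the kube-vip static pod running under CRI-O?
		sudo crictl ps --name kube-vip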
	I0815 23:23:05.738456   30687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:23:05.748502   30687 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 23:23:05.748570   30687 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 23:23:05.759211   30687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 23:23:05.759224   30687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0815 23:23:05.759239   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 23:23:05.759260   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:23:05.759316   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 23:23:05.759215   30687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0815 23:23:05.759365   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 23:23:05.759418   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 23:23:05.763829   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 23:23:05.763856   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 23:23:05.802101   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 23:23:05.802113   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 23:23:05.802158   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 23:23:05.802230   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 23:23:05.864204   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 23:23:05.864247   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
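
The preceding lines show the per-binary pattern minikube uses on a fresh node: stat the target under /var/lib/minikube/binaries/<version>/ and scp the cached copy over only when that stat fails. A hedged sketch of that check-then-copy loop follows; runRemote, copyToRemote, the host address and the cache path are illustrative stand-ins, not minikube's ssh_runner API.

package main

import (
    "fmt"
    "os/exec"
)

// runRemote runs a command on the node over ssh; a stand-in for the ssh_runner Run step.
func runRemote(host, cmd string) error {
    return exec.Command("ssh", host, cmd).Run()
}

// copyToRemote copies a local file to the node; a stand-in for the scp step.
func copyToRemote(host, local, remote string) error {
    return exec.Command("scp", local, host+":"+remote).Run()
}

func main() {
    host := "docker@192.168.39.100" // assumption: the new node's address
    cache := "/home/jenkins/.minikube/cache/linux/amd64/v1.31.0/"
    dest := "/var/lib/minikube/binaries/v1.31.0/"
    for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
        // Existence check mirrors: stat -c "%s %y" <dest><bin>
        if err := runRemote(host, fmt.Sprintf(`stat -c "%%s %%y" %s%s`, dest, bin)); err != nil {
            fmt.Printf("%s missing on node, copying from cache\n", bin)
            if err := copyToRemote(host, cache+bin, dest+bin); err != nil {
                fmt.Printf("copy of %s failed: %v\n", bin, err)
            }
        }
    }
}
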
	I0815 23:23:06.593208   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 23:23:06.603302   30687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 23:23:06.621144   30687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:23:06.639022   30687 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 23:23:06.655857   30687 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 23:23:06.659810   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:23:06.672850   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:23:06.822445   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:23:06.840787   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:23:06.841164   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:23:06.841200   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:23:06.860009   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42183
	I0815 23:23:06.860431   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:23:06.860886   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:23:06.860900   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:23:06.861211   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:23:06.861418   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:23:06.861570   30687 start.go:317] joinCluster: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:23:06.861689   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 23:23:06.861709   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:23:06.864542   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:23:06.864967   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:23:06.864993   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:23:06.865126   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:23:06.865303   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:23:06.865563   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:23:06.865753   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:23:07.015908   30687 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:23:07.015962   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4b87fr.idfvqj3ihtgii9y0 --discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-175414-m03 --control-plane --apiserver-advertise-address=192.168.39.100 --apiserver-bind-port=8443"
	I0815 23:23:30.056290   30687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4b87fr.idfvqj3ihtgii9y0 --discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-175414-m03 --control-plane --apiserver-advertise-address=192.168.39.100 --apiserver-bind-port=8443": (23.040299675s)
	I0815 23:23:30.056328   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 23:23:30.701633   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-175414-m03 minikube.k8s.io/updated_at=2024_08_15T23_23_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=ha-175414 minikube.k8s.io/primary=false
	I0815 23:23:30.855501   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-175414-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 23:23:30.992421   30687 start.go:319] duration metric: took 24.13084471s to joinCluster
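
The join above is the standard two-step kubeadm flow for adding a control-plane member: generate a join command on the primary with kubeadm token create --print-join-command, then run it on the new machine with the extra control-plane flags (--control-plane, --apiserver-advertise-address, --apiserver-bind-port, --node-name). A minimal Go sketch of that flow follows; the ssh helper, user names and addresses are assumptions for illustration, not minikube's implementation.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// sshOutput runs a command on a host over ssh and returns its combined output.
func sshOutput(host, cmd string) (string, error) {
    out, err := exec.Command("ssh", host, cmd).CombinedOutput()
    return strings.TrimSpace(string(out)), err
}

func main() {
    primary := "docker@192.168.39.67"  // assumption: primary control-plane node
    newNode := "docker@192.168.39.100" // assumption: node being added
    binDir := "/var/lib/minikube/binaries/v1.31.0"

    // Step 1: ask the primary for a join command (token + CA cert hash).
    join, err := sshOutput(primary, fmt.Sprintf(
        `sudo env PATH="%s:$PATH" kubeadm token create --print-join-command --ttl=0`, binDir))
    if err != nil {
        panic(err)
    }

    // Step 2: run it on the new machine with the control-plane additions seen above.
    full := fmt.Sprintf(
        "sudo %s --control-plane --apiserver-advertise-address=192.168.39.100 --apiserver-bind-port=8443 --node-name=ha-175414-m03 --ignore-preflight-errors=all",
        join)
    if out, err := sshOutput(newNode, full); err != nil {
        panic(fmt.Errorf("join failed: %v\n%s", err, out))
    }
    fmt.Println("control-plane node joined")
}
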
	I0815 23:23:30.992489   30687 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:23:30.992862   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:23:30.994013   30687 out.go:177] * Verifying Kubernetes components...
	I0815 23:23:30.995518   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:23:31.248956   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:23:31.299741   30687 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:23:31.300067   30687 kapi.go:59] client config for ha-175414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key", CAFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 23:23:31.300137   30687 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I0815 23:23:31.300429   30687 node_ready.go:35] waiting up to 6m0s for node "ha-175414-m03" to be "Ready" ...
	I0815 23:23:31.300512   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:31.300522   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:31.300533   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:31.300541   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:31.304697   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:31.800646   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:31.800673   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:31.800684   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:31.800690   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:31.821927   30687 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0815 23:23:32.300818   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:32.300841   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:32.300851   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:32.300855   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:32.304922   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:32.800861   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:32.800887   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:32.800899   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:32.800905   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:32.805086   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:33.301448   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:33.301474   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:33.301486   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:33.301491   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:33.305342   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:33.306020   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:33.801280   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:33.801311   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:33.801328   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:33.801332   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:33.804915   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:34.301005   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:34.301028   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:34.301038   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:34.301042   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:34.304808   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:34.801270   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:34.801294   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:34.801302   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:34.801306   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:34.804891   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:35.300789   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:35.300826   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:35.300836   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:35.300842   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:35.304772   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:35.800882   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:35.800904   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:35.800912   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:35.800916   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:35.804995   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:35.805749   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:36.301069   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:36.301096   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:36.301107   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:36.301112   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:36.304719   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:36.801592   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:36.801614   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:36.801639   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:36.801645   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:36.808498   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:23:37.301354   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:37.301377   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:37.301384   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:37.301388   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:37.304801   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:37.801034   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:37.801060   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:37.801071   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:37.801076   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:37.804352   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:38.301589   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:38.301612   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:38.301620   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:38.301625   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:38.304817   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:38.305393   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:38.800761   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:38.800781   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:38.800790   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:38.800797   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:38.804278   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:39.301485   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:39.301503   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:39.301510   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:39.301515   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:39.305090   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:39.801519   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:39.801547   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:39.801557   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:39.801562   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:39.805725   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:40.300746   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:40.300783   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:40.300795   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:40.300800   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:40.304408   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:40.801407   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:40.801430   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:40.801439   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:40.801442   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:40.804765   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:40.805875   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:41.301344   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:41.301366   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:41.301374   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:41.301378   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:41.305023   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:41.801343   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:41.801366   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:41.801374   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:41.801378   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:41.804510   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:42.300635   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:42.300657   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:42.300669   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:42.300675   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:42.308550   30687 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 23:23:42.800687   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:42.800706   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:42.800715   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:42.800719   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:42.806893   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:23:42.807521   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:43.300678   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:43.300704   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:43.300712   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:43.300717   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:43.304518   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:43.800646   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:43.800664   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:43.800675   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:43.800681   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:43.804280   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:44.301629   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:44.301652   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.301662   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.301667   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.304967   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:44.800870   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:44.800891   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.800899   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.800904   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.805708   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:44.806854   30687 node_ready.go:49] node "ha-175414-m03" has status "Ready":"True"
	I0815 23:23:44.806871   30687 node_ready.go:38] duration metric: took 13.506426047s for node "ha-175414-m03" to be "Ready" ...
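
The long run of GET /api/v1/nodes/ha-175414-m03 requests above is a simple readiness poll: fetch the Node object roughly every 500ms until its Ready condition reports True. A small client-go sketch of the same loop is shown below, assuming a kubeconfig path; it is not the code in node_ready.go, just the same idea.

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption: kubeconfig path; the test run uses its own profile kubeconfig.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Fetch the Node object every 500ms until its Ready condition is True,
    // mirroring the repeated GET /api/v1/nodes/ha-175414-m03 calls above.
    for {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-175414-m03", metav1.GetOptions{})
        if err == nil {
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node ha-175414-m03 is Ready")
                    return
                }
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
}
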
	I0815 23:23:44.806879   30687 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 23:23:44.806963   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:44.806974   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.806981   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.806985   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.813632   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:23:44.824518   30687 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.824604   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vkm5s
	I0815 23:23:44.824616   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.824626   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.824634   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.829322   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:44.830013   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:44.830029   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.830037   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.830046   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.832511   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.833213   30687 pod_ready.go:93] pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:44.833231   30687 pod_ready.go:82] duration metric: took 8.687111ms for pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.833240   30687 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.833288   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zrv4c
	I0815 23:23:44.833296   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.833303   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.833307   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.835836   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.836440   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:44.836454   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.836464   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.836469   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.838784   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.839261   30687 pod_ready.go:93] pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:44.839277   30687 pod_ready.go:82] duration metric: took 6.030455ms for pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.839287   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.839338   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414
	I0815 23:23:44.839347   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.839357   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.839364   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.841589   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.842021   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:44.842036   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.842053   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.842060   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.844617   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.845030   30687 pod_ready.go:93] pod "etcd-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:44.845049   30687 pod_ready.go:82] duration metric: took 5.755224ms for pod "etcd-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.845057   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.845107   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414-m02
	I0815 23:23:44.845115   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.845121   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.845125   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.847644   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.848244   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:44.848256   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.848263   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.848267   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.852966   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:44.853609   30687 pod_ready.go:93] pod "etcd-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:44.853624   30687 pod_ready.go:82] duration metric: took 8.561513ms for pod "etcd-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.853633   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.001545   30687 request.go:632] Waited for 147.837871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414-m03
	I0815 23:23:45.001611   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414-m03
	I0815 23:23:45.001624   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.001638   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.001648   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.005990   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:45.200923   30687 request.go:632] Waited for 194.292719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:45.200975   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:45.200980   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.200988   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.200991   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.204867   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:45.205962   30687 pod_ready.go:93] pod "etcd-ha-175414-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:45.205980   30687 pod_ready.go:82] duration metric: took 352.340987ms for pod "etcd-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.205996   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.401080   30687 request.go:632] Waited for 195.010527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414
	I0815 23:23:45.401134   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414
	I0815 23:23:45.401143   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.401153   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.401162   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.405069   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:45.600978   30687 request.go:632] Waited for 194.997854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:45.601036   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:45.601047   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.601058   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.601065   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.604238   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:45.604794   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:45.604813   30687 pod_ready.go:82] duration metric: took 398.811839ms for pod "kube-apiserver-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.604822   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.800898   30687 request.go:632] Waited for 195.997321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m02
	I0815 23:23:45.800956   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m02
	I0815 23:23:45.800964   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.800975   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.800982   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.805329   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:46.001563   30687 request.go:632] Waited for 195.379594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:46.001656   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:46.001669   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.001679   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.001689   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.005268   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.005756   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:46.005778   30687 pod_ready.go:82] duration metric: took 400.948427ms for pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.005790   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.200895   30687 request.go:632] Waited for 195.01624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m03
	I0815 23:23:46.200955   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m03
	I0815 23:23:46.200960   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.200970   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.200976   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.204629   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.401150   30687 request.go:632] Waited for 195.373693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:46.401206   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:46.401211   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.401230   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.401234   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.404647   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.405529   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:46.405547   30687 pod_ready.go:82] duration metric: took 399.747287ms for pod "kube-apiserver-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.405557   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.601364   30687 request.go:632] Waited for 195.751664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414
	I0815 23:23:46.601424   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414
	I0815 23:23:46.601445   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.601460   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.601465   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.605197   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.801302   30687 request.go:632] Waited for 195.345088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:46.801352   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:46.801357   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.801364   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.801368   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.804720   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.805266   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:46.805285   30687 pod_ready.go:82] duration metric: took 399.721484ms for pod "kube-controller-manager-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.805294   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.001221   30687 request.go:632] Waited for 195.863944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m02
	I0815 23:23:47.001305   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m02
	I0815 23:23:47.001315   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.001325   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.001335   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.005415   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:47.201592   30687 request.go:632] Waited for 195.358667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:47.201666   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:47.201673   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.201682   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.201690   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.205325   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:47.205833   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:47.205870   30687 pod_ready.go:82] duration metric: took 400.568411ms for pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.205884   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.401823   30687 request.go:632] Waited for 195.870502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m03
	I0815 23:23:47.401909   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m03
	I0815 23:23:47.401915   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.401922   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.401928   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.405203   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:47.601373   30687 request.go:632] Waited for 195.370984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:47.601443   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:47.601451   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.601461   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.601468   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.604549   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:47.605090   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:47.605115   30687 pod_ready.go:82] duration metric: took 399.218678ms for pod "kube-controller-manager-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.605127   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4frcn" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.801148   30687 request.go:632] Waited for 195.940242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frcn
	I0815 23:23:47.801215   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frcn
	I0815 23:23:47.801220   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.801228   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.801233   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.805182   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.001380   30687 request.go:632] Waited for 195.387295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:48.001450   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:48.001457   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.001465   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.001471   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.004387   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:48.004874   30687 pod_ready.go:93] pod "kube-proxy-4frcn" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:48.004900   30687 pod_ready.go:82] duration metric: took 399.761857ms for pod "kube-proxy-4frcn" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.004909   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dcnmc" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.200924   30687 request.go:632] Waited for 195.916214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcnmc
	I0815 23:23:48.200988   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcnmc
	I0815 23:23:48.200995   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.201004   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.201010   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.204684   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.400864   30687 request.go:632] Waited for 195.278732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:48.400912   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:48.400917   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.400924   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.400928   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.404359   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.405073   30687 pod_ready.go:93] pod "kube-proxy-dcnmc" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:48.405091   30687 pod_ready.go:82] duration metric: took 400.176798ms for pod "kube-proxy-dcnmc" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.405100   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtps7" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.601203   30687 request.go:632] Waited for 196.039174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtps7
	I0815 23:23:48.601263   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtps7
	I0815 23:23:48.601268   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.601276   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.601283   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.604652   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.800831   30687 request.go:632] Waited for 195.271767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:48.800905   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:48.800912   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.800921   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.800929   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.804469   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.805110   30687 pod_ready.go:93] pod "kube-proxy-qtps7" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:48.805127   30687 pod_ready.go:82] duration metric: took 400.021436ms for pod "kube-proxy-qtps7" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.805135   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.001327   30687 request.go:632] Waited for 196.131395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414
	I0815 23:23:49.001419   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414
	I0815 23:23:49.001429   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.001437   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.001441   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.005164   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:49.201496   30687 request.go:632] Waited for 195.769695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:49.201579   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:49.201587   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.201598   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.201605   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.204930   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:49.205615   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:49.205641   30687 pod_ready.go:82] duration metric: took 400.498233ms for pod "kube-scheduler-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.205653   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.400850   30687 request.go:632] Waited for 195.133191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m02
	I0815 23:23:49.400934   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m02
	I0815 23:23:49.400947   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.400958   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.400963   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.404499   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:49.601511   30687 request.go:632] Waited for 196.355466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:49.601600   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:49.601610   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.601622   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.601632   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.605256   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:49.605716   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:49.605734   30687 pod_ready.go:82] duration metric: took 400.074118ms for pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.605744   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.801791   30687 request.go:632] Waited for 195.986943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m03
	I0815 23:23:49.801898   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m03
	I0815 23:23:49.801911   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.801921   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.801927   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.805859   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:50.001482   30687 request.go:632] Waited for 194.855782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:50.001552   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:50.001559   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.001570   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.001579   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.004961   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:50.006147   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:50.006169   30687 pod_ready.go:82] duration metric: took 400.418594ms for pod "kube-scheduler-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:50.006184   30687 pod_ready.go:39] duration metric: took 5.199294359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 23:23:50.006204   30687 api_server.go:52] waiting for apiserver process to appear ...
	I0815 23:23:50.006268   30687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:23:50.022008   30687 api_server.go:72] duration metric: took 19.029466222s to wait for apiserver process to appear ...
	I0815 23:23:50.022041   30687 api_server.go:88] waiting for apiserver healthz status ...
	I0815 23:23:50.022061   30687 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0815 23:23:50.026169   30687 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I0815 23:23:50.026240   30687 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I0815 23:23:50.026249   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.026257   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.026261   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.026974   30687 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 23:23:50.027131   30687 api_server.go:141] control plane version: v1.31.0
	I0815 23:23:50.027149   30687 api_server.go:131] duration metric: took 5.102316ms to wait for apiserver health ...
	I0815 23:23:50.027156   30687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 23:23:50.201552   30687 request.go:632] Waited for 174.330625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:50.201608   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:50.201614   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.201622   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.201626   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.207806   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:23:50.216059   30687 system_pods.go:59] 24 kube-system pods found
	I0815 23:23:50.216091   30687 system_pods.go:61] "coredns-6f6b679f8f-vkm5s" [1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c] Running
	I0815 23:23:50.216098   30687 system_pods.go:61] "coredns-6f6b679f8f-zrv4c" [97d399d0-871e-4e59-8c4d-093b5a29a107] Running
	I0815 23:23:50.216104   30687 system_pods.go:61] "etcd-ha-175414" [8358595a-b7fc-40b0-b3a1-8bce46f618dd] Running
	I0815 23:23:50.216108   30687 system_pods.go:61] "etcd-ha-175414-m02" [fd9e81e9-bfd2-4040-9425-06a84b9c3dda] Running
	I0815 23:23:50.216114   30687 system_pods.go:61] "etcd-ha-175414-m03" [38df15d2-57c3-4c67-ac95-fee5aa93ec03] Running
	I0815 23:23:50.216119   30687 system_pods.go:61] "kindnet-47nts" [969ed4f0-c372-4d22-ba84-cfcd5774f1cf] Running
	I0815 23:23:50.216123   30687 system_pods.go:61] "kindnet-fp2gc" [b52bd53f-e131-4859-9825-3596c8dbab8f] Running
	I0815 23:23:50.216129   30687 system_pods.go:61] "kindnet-jjcdm" [534a226d-c0b6-4a2f-8b2c-27921c9e1aca] Running
	I0815 23:23:50.216134   30687 system_pods.go:61] "kube-apiserver-ha-175414" [74c0c52d-72f6-425e-ba1e-047ebb890ed4] Running
	I0815 23:23:50.216140   30687 system_pods.go:61] "kube-apiserver-ha-175414-m02" [019a6c53-1d80-40a3-93ea-6179c12e17ed] Running
	I0815 23:23:50.216147   30687 system_pods.go:61] "kube-apiserver-ha-175414-m03" [26088bb4-d35b-41a0-9eb0-688801e214fd] Running
	I0815 23:23:50.216154   30687 system_pods.go:61] "kube-controller-manager-ha-175414" [88aeb420-f593-4e18-8149-6fe48fd85b7d] Running
	I0815 23:23:50.216163   30687 system_pods.go:61] "kube-controller-manager-ha-175414-m02" [be3e762b-556f-4881-9a29-c9a867ccb5e7] Running
	I0815 23:23:50.216170   30687 system_pods.go:61] "kube-controller-manager-ha-175414-m03" [a6b31b93-6048-43ea-8e33-e33fb2eeaf43] Running
	I0815 23:23:50.216175   30687 system_pods.go:61] "kube-proxy-4frcn" [2831334a-a379-4f6d-ada3-53a01fc6f65e] Running
	I0815 23:23:50.216182   30687 system_pods.go:61] "kube-proxy-dcnmc" [572a1e80-23b0-4cb9-bfab-067b6853226d] Running
	I0815 23:23:50.216190   30687 system_pods.go:61] "kube-proxy-qtps7" [c5b0adc1-50ae-4b09-8704-1449c241d874] Running
	I0815 23:23:50.216195   30687 system_pods.go:61] "kube-scheduler-ha-175414" [7463fcbb-2a5f-4101-8b25-f72c74ca515a] Running
	I0815 23:23:50.216205   30687 system_pods.go:61] "kube-scheduler-ha-175414-m02" [1e5715dc-154a-4669-8a4e-986bb989a16b] Running
	I0815 23:23:50.216213   30687 system_pods.go:61] "kube-scheduler-ha-175414-m03" [06298593-3572-4444-a52c-1594e3a4ab79] Running
	I0815 23:23:50.216218   30687 system_pods.go:61] "kube-vip-ha-175414" [6b98571e-8ad5-45e0-acbc-d0e875647a69] Running
	I0815 23:23:50.216226   30687 system_pods.go:61] "kube-vip-ha-175414-m02" [4877d97c-4adb-4ce8-813f-0819e8a96b5a] Running
	I0815 23:23:50.216230   30687 system_pods.go:61] "kube-vip-ha-175414-m03" [40f35284-b260-46c5-9766-d8a59b5a80cc] Running
	I0815 23:23:50.216235   30687 system_pods.go:61] "storage-provisioner" [7042d764-6043-449c-a1e9-aaa28256c579] Running
	I0815 23:23:50.216245   30687 system_pods.go:74] duration metric: took 189.083233ms to wait for pod list to return data ...
	I0815 23:23:50.216258   30687 default_sa.go:34] waiting for default service account to be created ...
	I0815 23:23:50.401690   30687 request.go:632] Waited for 185.360404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I0815 23:23:50.401741   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I0815 23:23:50.401746   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.401753   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.401756   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.405572   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:50.405677   30687 default_sa.go:45] found service account: "default"
	I0815 23:23:50.405690   30687 default_sa.go:55] duration metric: took 189.426177ms for default service account to be created ...
	I0815 23:23:50.405700   30687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 23:23:50.600989   30687 request.go:632] Waited for 195.210751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:50.601046   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:50.601051   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.601058   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.601062   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.606926   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:23:50.613402   30687 system_pods.go:86] 24 kube-system pods found
	I0815 23:23:50.613430   30687 system_pods.go:89] "coredns-6f6b679f8f-vkm5s" [1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c] Running
	I0815 23:23:50.613436   30687 system_pods.go:89] "coredns-6f6b679f8f-zrv4c" [97d399d0-871e-4e59-8c4d-093b5a29a107] Running
	I0815 23:23:50.613441   30687 system_pods.go:89] "etcd-ha-175414" [8358595a-b7fc-40b0-b3a1-8bce46f618dd] Running
	I0815 23:23:50.613446   30687 system_pods.go:89] "etcd-ha-175414-m02" [fd9e81e9-bfd2-4040-9425-06a84b9c3dda] Running
	I0815 23:23:50.613450   30687 system_pods.go:89] "etcd-ha-175414-m03" [38df15d2-57c3-4c67-ac95-fee5aa93ec03] Running
	I0815 23:23:50.613453   30687 system_pods.go:89] "kindnet-47nts" [969ed4f0-c372-4d22-ba84-cfcd5774f1cf] Running
	I0815 23:23:50.613458   30687 system_pods.go:89] "kindnet-fp2gc" [b52bd53f-e131-4859-9825-3596c8dbab8f] Running
	I0815 23:23:50.613464   30687 system_pods.go:89] "kindnet-jjcdm" [534a226d-c0b6-4a2f-8b2c-27921c9e1aca] Running
	I0815 23:23:50.613469   30687 system_pods.go:89] "kube-apiserver-ha-175414" [74c0c52d-72f6-425e-ba1e-047ebb890ed4] Running
	I0815 23:23:50.613475   30687 system_pods.go:89] "kube-apiserver-ha-175414-m02" [019a6c53-1d80-40a3-93ea-6179c12e17ed] Running
	I0815 23:23:50.613480   30687 system_pods.go:89] "kube-apiserver-ha-175414-m03" [26088bb4-d35b-41a0-9eb0-688801e214fd] Running
	I0815 23:23:50.613487   30687 system_pods.go:89] "kube-controller-manager-ha-175414" [88aeb420-f593-4e18-8149-6fe48fd85b7d] Running
	I0815 23:23:50.613496   30687 system_pods.go:89] "kube-controller-manager-ha-175414-m02" [be3e762b-556f-4881-9a29-c9a867ccb5e7] Running
	I0815 23:23:50.613502   30687 system_pods.go:89] "kube-controller-manager-ha-175414-m03" [a6b31b93-6048-43ea-8e33-e33fb2eeaf43] Running
	I0815 23:23:50.613510   30687 system_pods.go:89] "kube-proxy-4frcn" [2831334a-a379-4f6d-ada3-53a01fc6f65e] Running
	I0815 23:23:50.613514   30687 system_pods.go:89] "kube-proxy-dcnmc" [572a1e80-23b0-4cb9-bfab-067b6853226d] Running
	I0815 23:23:50.613518   30687 system_pods.go:89] "kube-proxy-qtps7" [c5b0adc1-50ae-4b09-8704-1449c241d874] Running
	I0815 23:23:50.613521   30687 system_pods.go:89] "kube-scheduler-ha-175414" [7463fcbb-2a5f-4101-8b25-f72c74ca515a] Running
	I0815 23:23:50.613525   30687 system_pods.go:89] "kube-scheduler-ha-175414-m02" [1e5715dc-154a-4669-8a4e-986bb989a16b] Running
	I0815 23:23:50.613528   30687 system_pods.go:89] "kube-scheduler-ha-175414-m03" [06298593-3572-4444-a52c-1594e3a4ab79] Running
	I0815 23:23:50.613532   30687 system_pods.go:89] "kube-vip-ha-175414" [6b98571e-8ad5-45e0-acbc-d0e875647a69] Running
	I0815 23:23:50.613537   30687 system_pods.go:89] "kube-vip-ha-175414-m02" [4877d97c-4adb-4ce8-813f-0819e8a96b5a] Running
	I0815 23:23:50.613540   30687 system_pods.go:89] "kube-vip-ha-175414-m03" [40f35284-b260-46c5-9766-d8a59b5a80cc] Running
	I0815 23:23:50.613543   30687 system_pods.go:89] "storage-provisioner" [7042d764-6043-449c-a1e9-aaa28256c579] Running
	I0815 23:23:50.613549   30687 system_pods.go:126] duration metric: took 207.843363ms to wait for k8s-apps to be running ...
	I0815 23:23:50.613558   30687 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 23:23:50.613611   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:23:50.629314   30687 system_svc.go:56] duration metric: took 15.74754ms WaitForService to wait for kubelet
	I0815 23:23:50.629344   30687 kubeadm.go:582] duration metric: took 19.636826655s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:23:50.629364   30687 node_conditions.go:102] verifying NodePressure condition ...
	I0815 23:23:50.801775   30687 request.go:632] Waited for 172.327841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I0815 23:23:50.801855   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I0815 23:23:50.801863   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.801874   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.801883   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.805163   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:50.806331   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:23:50.806355   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:23:50.806367   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:23:50.806373   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:23:50.806379   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:23:50.806385   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:23:50.806394   30687 node_conditions.go:105] duration metric: took 177.024539ms to run NodePressure ...
	I0815 23:23:50.806412   30687 start.go:241] waiting for startup goroutines ...
	I0815 23:23:50.806440   30687 start.go:255] writing updated cluster config ...
	I0815 23:23:50.806880   30687 ssh_runner.go:195] Run: rm -f paused
	I0815 23:23:50.862887   30687 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 23:23:50.864906   30687 out.go:177] * Done! kubectl is now configured to use "ha-175414" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.321544615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764448321516597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7763d171-c95f-4519-9ec8-6c1538051e3b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.322020206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1d9e176-fe28-4344-b4fe-94fdd5f58847 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.322068815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1d9e176-fe28-4344-b4fe-94fdd5f58847 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.322364369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764234620075693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100474735774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64,PodSandboxId:0f2dc7e79b3c74df25a4d1ebdc2d96c530541e3e962c0c36199d5ad7eea102cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100385963377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e,PodSandboxId:4c614a1c6c9dea073c43a9cd30ead9ad003f484689c554bd48ea1641a3a4abdc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723764100321406097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723764088513443509,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172376408
6148992845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55,PodSandboxId:34a71387942ef9bcbe15686c7fe9d58053c3e8ef143127344df17af40b41b882,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172376407625
7018114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e42bdbbf7659c494233926d7ef3e13,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764074424182895,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764074344815634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0,PodSandboxId:15475f8def71f4a6f45616da4d996e4c991a45545d8aacf02f59e373bf37a11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764074281578454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8,PodSandboxId:6b83d3bb335b68c84fbee1c11a8d3a78b69931e4d5b0b481badf3435346f0cc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764074310537239,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1d9e176-fe28-4344-b4fe-94fdd5f58847 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.359080412Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36bea597-642b-4d27-8951-3351ef8a7e24 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.359175188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36bea597-642b-4d27-8951-3351ef8a7e24 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.360687810Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01f94c2c-dd6d-48f2-a82d-8694cb1737e8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.361161893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764448361133216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01f94c2c-dd6d-48f2-a82d-8694cb1737e8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.361746029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64d61600-97c8-4c93-a472-60d964699ec2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.361814134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64d61600-97c8-4c93-a472-60d964699ec2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.362037035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764234620075693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100474735774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64,PodSandboxId:0f2dc7e79b3c74df25a4d1ebdc2d96c530541e3e962c0c36199d5ad7eea102cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100385963377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e,PodSandboxId:4c614a1c6c9dea073c43a9cd30ead9ad003f484689c554bd48ea1641a3a4abdc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723764100321406097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723764088513443509,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172376408
6148992845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55,PodSandboxId:34a71387942ef9bcbe15686c7fe9d58053c3e8ef143127344df17af40b41b882,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172376407625
7018114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e42bdbbf7659c494233926d7ef3e13,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764074424182895,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764074344815634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0,PodSandboxId:15475f8def71f4a6f45616da4d996e4c991a45545d8aacf02f59e373bf37a11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764074281578454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8,PodSandboxId:6b83d3bb335b68c84fbee1c11a8d3a78b69931e4d5b0b481badf3435346f0cc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764074310537239,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64d61600-97c8-4c93-a472-60d964699ec2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.399117048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1267ce9-c9eb-4a68-b222-b67ce3225fd8 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.399209988Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1267ce9-c9eb-4a68-b222-b67ce3225fd8 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.400773045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7164e7ee-05e6-4c93-8b48-db2ae12759c8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.402648667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764448402616254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7164e7ee-05e6-4c93-8b48-db2ae12759c8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.403402265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c38c5839-6aad-46ba-a607-de7e806ec83a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.403479627Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c38c5839-6aad-46ba-a607-de7e806ec83a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.403702380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764234620075693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100474735774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64,PodSandboxId:0f2dc7e79b3c74df25a4d1ebdc2d96c530541e3e962c0c36199d5ad7eea102cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100385963377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e,PodSandboxId:4c614a1c6c9dea073c43a9cd30ead9ad003f484689c554bd48ea1641a3a4abdc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723764100321406097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723764088513443509,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172376408
6148992845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55,PodSandboxId:34a71387942ef9bcbe15686c7fe9d58053c3e8ef143127344df17af40b41b882,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172376407625
7018114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e42bdbbf7659c494233926d7ef3e13,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764074424182895,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764074344815634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0,PodSandboxId:15475f8def71f4a6f45616da4d996e4c991a45545d8aacf02f59e373bf37a11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764074281578454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8,PodSandboxId:6b83d3bb335b68c84fbee1c11a8d3a78b69931e4d5b0b481badf3435346f0cc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764074310537239,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c38c5839-6aad-46ba-a607-de7e806ec83a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.443070505Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff07e8d9-6bfd-4384-a23e-c0efa9c2abd5 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.443165489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff07e8d9-6bfd-4384-a23e-c0efa9c2abd5 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.444719340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fcf9a7a4-9893-4413-9d21-ce01da82e283 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.445166490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764448445143141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcf9a7a4-9893-4413-9d21-ce01da82e283 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.446187517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c664b6b-43a0-452a-a871-07e0ab7ebf6b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.446298312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c664b6b-43a0-452a-a871-07e0ab7ebf6b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:27:28 ha-175414 crio[681]: time="2024-08-15 23:27:28.446526682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764234620075693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100474735774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64,PodSandboxId:0f2dc7e79b3c74df25a4d1ebdc2d96c530541e3e962c0c36199d5ad7eea102cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100385963377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e,PodSandboxId:4c614a1c6c9dea073c43a9cd30ead9ad003f484689c554bd48ea1641a3a4abdc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723764100321406097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723764088513443509,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172376408
6148992845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55,PodSandboxId:34a71387942ef9bcbe15686c7fe9d58053c3e8ef143127344df17af40b41b882,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172376407625
7018114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e42bdbbf7659c494233926d7ef3e13,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764074424182895,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764074344815634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0,PodSandboxId:15475f8def71f4a6f45616da4d996e4c991a45545d8aacf02f59e373bf37a11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764074281578454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8,PodSandboxId:6b83d3bb335b68c84fbee1c11a8d3a78b69931e4d5b0b481badf3435346f0cc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764074310537239,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c664b6b-43a0-452a-a871-07e0ab7ebf6b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6f2ac1a3791a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1555ba5313b4a       busybox-7dff88458-ztvms
	d266fdeedd2d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   33df4c1e88a57       coredns-6f6b679f8f-vkm5s
	6bdc1076f0d11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   0f2dc7e79b3c7       coredns-6f6b679f8f-zrv4c
	fd145e0bce0eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   4c614a1c6c9de       storage-provisioner
	dce83cbb20557       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   1392391da1090       kindnet-jjcdm
	70eb25dbc5fac       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   51e2286f4b6df       kube-proxy-4frcn
	41980bfc0d44a       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   34a71387942ef       kube-vip-ha-175414
	aaba7057e0920       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   94e761b5a2dbf       etcd-ha-175414
	af5abf6569d1f       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   6bc6e4c03eedb       kube-scheduler-ha-175414
	b61812e4ed00f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   6b83d3bb335b6       kube-apiserver-ha-175414
	0f0f5c055e67f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   15475f8def71f       kube-controller-manager-ha-175414
	
	
	==> coredns [6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64] <==
	[INFO] 10.244.2.2:42343 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003476687s
	[INFO] 10.244.2.2:34294 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204037s
	[INFO] 10.244.2.2:41230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132845s
	[INFO] 10.244.1.2:43940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132764s
	[INFO] 10.244.1.2:35236 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096436s
	[INFO] 10.244.1.2:41499 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127607s
	[INFO] 10.244.1.2:55520 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076785s
	[INFO] 10.244.1.2:46694 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099473s
	[INFO] 10.244.0.4:47376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152741s
	[INFO] 10.244.0.4:38412 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001860253s
	[INFO] 10.244.0.4:37064 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000527s
	[INFO] 10.244.0.4:57092 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096595s
	[INFO] 10.244.0.4:44776 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060092s
	[INFO] 10.244.0.4:49265 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034776s
	[INFO] 10.244.2.2:56855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153031s
	[INFO] 10.244.2.2:56811 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148425s
	[INFO] 10.244.2.2:56795 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112285s
	[INFO] 10.244.2.2:33122 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109125s
	[INFO] 10.244.1.2:53479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203125s
	[INFO] 10.244.0.4:39088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127065s
	[INFO] 10.244.0.4:44479 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007416s
	[INFO] 10.244.2.2:38995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210639s
	[INFO] 10.244.2.2:51708 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000191376s
	[INFO] 10.244.1.2:46430 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129937s
	[INFO] 10.244.1.2:41358 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094083s
	
	
	==> coredns [d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38456 - 3166 "HINFO IN 1280106060145409119.2838945066204880542. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009788563s
	[INFO] 10.244.2.2:43352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000332013s
	[INFO] 10.244.2.2:55356 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230082s
	[INFO] 10.244.2.2:53708 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003726881s
	[INFO] 10.244.2.2:42627 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166307s
	[INFO] 10.244.2.2:37289 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162629s
	[INFO] 10.244.1.2:51252 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001943848s
	[INFO] 10.244.1.2:54890 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100499s
	[INFO] 10.244.1.2:34298 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001419075s
	[INFO] 10.244.0.4:33304 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001325515s
	[INFO] 10.244.0.4:42189 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073238s
	[INFO] 10.244.1.2:35312 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127561s
	[INFO] 10.244.1.2:42713 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174951s
	[INFO] 10.244.1.2:32898 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119329s
	[INFO] 10.244.0.4:58944 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116555s
	[INFO] 10.244.0.4:59435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073012s
	[INFO] 10.244.2.2:60026 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235829s
	[INFO] 10.244.2.2:58530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018432s
	[INFO] 10.244.1.2:44913 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119773s
	[INFO] 10.244.1.2:52756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123167s
	[INFO] 10.244.0.4:39480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124675s
	[INFO] 10.244.0.4:51365 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114789s
	[INFO] 10.244.0.4:49967 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068329s
	[INFO] 10.244.0.4:42637 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073642s
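	
	A hedged aside on the coredns entries above: queries for the bare name kubernetes.default are not cluster-local and return NXDOMAIN, while the search-path-expanded form kubernetes.default.svc.cluster.local resolves with NOERROR. Assuming the busybox-7dff88458-ztvms pod from the container listing is still running and the kubeconfig context matches the profile name, both cases could be reproduced with something like:
	
	  kubectl --context ha-175414 exec busybox-7dff88458-ztvms -- nslookup kubernetes.default
	  kubectl --context ha-175414 exec busybox-7dff88458-ztvms -- nslookup kubernetes.default.svc.cluster.local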
	
	
	==> describe nodes <==
	Name:               ha-175414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T23_21_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:21:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:27:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:24:24 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:24:24 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:24:24 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:24:24 +0000   Thu, 15 Aug 2024 23:21:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-175414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b0ddee9ca5943d7802a25ee6a9c7f34
	  System UUID:                7b0ddee9-ca59-43d7-802a-25ee6a9c7f34
	  Boot ID:                    a257efb5-ad21-419a-b259-592d48073d80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ztvms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-6f6b679f8f-vkm5s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 coredns-6f6b679f8f-zrv4c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 etcd-ha-175414                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-jjcdm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-apiserver-ha-175414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-175414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-4frcn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-175414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-175414                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m2s                   kube-proxy       
	  Normal  NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node ha-175414 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node ha-175414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s (x8 over 6m15s)  kubelet          Node ha-175414 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m8s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s                   kubelet          Node ha-175414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s                   kubelet          Node ha-175414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s                   kubelet          Node ha-175414 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m4s                   node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal  NodeReady                5m49s                  kubelet          Node ha-175414 status is now: NodeReady
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	
	
	Name:               ha-175414-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_22_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:22:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:25:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 23:24:16 +0000   Thu, 15 Aug 2024 23:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 23:24:16 +0000   Thu, 15 Aug 2024 23:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 23:24:16 +0000   Thu, 15 Aug 2024 23:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 23:24:16 +0000   Thu, 15 Aug 2024 23:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-175414-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e48881ea1334f28a03d47bf7b09ff84
	  System UUID:                1e48881e-a133-4f28-a03d-47bf7b09ff84
	  Boot ID:                    1b12d3a1-294c-4b9b-8f62-e1a31d19c9ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kt8v4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-175414-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m13s
	  kube-system                 kindnet-47nts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m15s
	  kube-system                 kube-apiserver-ha-175414-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-ha-175414-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-dcnmc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-scheduler-ha-175414-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-175414-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m15s (x8 over 5m15s)  kubelet          Node ha-175414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m15s (x8 over 5m15s)  kubelet          Node ha-175414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m15s (x7 over 5m15s)  kubelet          Node ha-175414-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  NodeNotReady             99s                    node-controller  Node ha-175414-m02 status is now: NodeNotReady
	
	
	Name:               ha-175414-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_23_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:27:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:23:57 +0000   Thu, 15 Aug 2024 23:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:23:57 +0000   Thu, 15 Aug 2024 23:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:23:57 +0000   Thu, 15 Aug 2024 23:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:23:57 +0000   Thu, 15 Aug 2024 23:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-175414-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03cd54aa1c764ef1be98b373af236f27
	  System UUID:                03cd54aa-1c76-4ef1-be98-b373af236f27
	  Boot ID:                    70b13ab6-f27f-49c0-87ea-06e9fc33a543
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-glqlv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-175414-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m
	  kube-system                 kindnet-fp2gc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-apiserver-ha-175414-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-controller-manager-ha-175414-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-qtps7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-ha-175414-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-vip-ha-175414-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x8 over 4m2s)  kubelet          Node ha-175414-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x8 over 4m2s)  kubelet          Node ha-175414-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x7 over 4m2s)  kubelet          Node ha-175414-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	  Normal  RegisteredNode           3m52s                node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	
	
	Name:               ha-175414-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_24_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:24:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:27:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:25:01 +0000   Thu, 15 Aug 2024 23:24:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:25:01 +0000   Thu, 15 Aug 2024 23:24:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:25:01 +0000   Thu, 15 Aug 2024 23:24:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:25:01 +0000   Thu, 15 Aug 2024 23:24:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    ha-175414-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4da843156b4c43e0a4311c72833aae78
	  System UUID:                4da84315-6b4c-43e0-a431-1c72833aae78
	  Boot ID:                    2cdb3f67-21f7-46f8-9d79-849dd6359a7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6bf4q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m58s
	  kube-system                 kube-proxy-jm5fj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m58s (x2 over 2m59s)  kubelet          Node ha-175414-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x2 over 2m59s)  kubelet          Node ha-175414-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x2 over 2m59s)  kubelet          Node ha-175414-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s                  node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal  RegisteredNode           2m56s                  node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal  RegisteredNode           2m54s                  node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal  NodeReady                2m40s                  kubelet          Node ha-175414-m04 status is now: NodeReady
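	
	One detail worth noting in the describe output: ha-175414-m02 reports every condition as Unknown and was marked NodeNotReady after its kubelet stopped posting status, while ha-175414, ha-175414-m03 and ha-175414-m04 are Ready. A minimal sketch of re-checking that state against this profile (assuming the kubeconfig context is named ha-175414, as minikube creates for the profile):
	
	  kubectl --context ha-175414 get nodes
	  kubectl --context ha-175414 get node ha-175414-m02 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'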
	
	
	==> dmesg <==
	[Aug15 23:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051277] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040299] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.817240] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.555052] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.598331] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 23:21] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.056390] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050948] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.198639] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.119702] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.271672] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.126980] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.023155] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.059629] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.252555] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.087359] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.483452] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.149794] kauditd_printk_skb: 38 callbacks suppressed
	[Aug15 23:22] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391] <==
	{"level":"warn","ts":"2024-08-15T23:27:28.603566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.656429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.673400Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.701425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.740872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.745360Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.760746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.777933Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.780967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.791613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.798804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.801467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.805409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.862925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.870670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.874664Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.878405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.887478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.894041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.900431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.901177Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.905509Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.908869Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.912628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:27:28.918626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:27:29 up 6 min,  0 users,  load average: 0.12, 0.23, 0.12
	Linux ha-175414 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a] <==
	I0815 23:26:49.562915       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:26:59.563893       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:26:59.564082       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:26:59.564357       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:26:59.564415       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:26:59.564506       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:26:59.564533       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:26:59.564612       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:26:59.564631       1 main.go:299] handling current node
	I0815 23:27:09.566458       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:27:09.566531       1 main.go:299] handling current node
	I0815 23:27:09.566564       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:27:09.566573       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:27:09.567015       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:27:09.567057       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:27:09.567175       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:27:09.567203       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:27:19.567852       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:27:19.567980       1 main.go:299] handling current node
	I0815 23:27:19.568046       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:27:19.568065       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:27:19.568357       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:27:19.568409       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:27:19.568522       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:27:19.568544       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
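	kindnet is still cycling through all four nodes and logging the pod CIDR it routes for each peer (10.244.1.0/24 for m02, 10.244.2.0/24 for m03, 10.244.3.0/24 for m04), so the CNI view stays intact while m02's host is down. Those assignments can be cross-checked against the API with a plain kubectl query, assuming the same context is reachable:
	kubectl --context ha-175414 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'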
	
	
	==> kube-apiserver [b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8] <==
	I0815 23:21:25.012401       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0815 23:23:27.274010       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0815 23:23:27.274480       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 16.123µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0815 23:23:27.275895       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0815 23:23:27.277086       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0815 23:23:27.278403       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.508199ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0815 23:23:56.025489       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52068: use of closed network connection
	E0815 23:23:56.210192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52100: use of closed network connection
	E0815 23:23:56.395640       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52118: use of closed network connection
	E0815 23:23:56.768921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52158: use of closed network connection
	E0815 23:23:56.954342       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52188: use of closed network connection
	E0815 23:23:57.138477       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52212: use of closed network connection
	E0815 23:23:57.324754       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52228: use of closed network connection
	E0815 23:23:57.806212       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52276: use of closed network connection
	E0815 23:23:57.994354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52294: use of closed network connection
	E0815 23:23:58.181153       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52312: use of closed network connection
	E0815 23:23:58.376907       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52324: use of closed network connection
	E0815 23:23:58.555602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52346: use of closed network connection
	E0815 23:23:58.742430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52362: use of closed network connection
	E0815 23:24:30.699162       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0815 23:24:30.699644       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 4.04µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0815 23:24:30.700989       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0815 23:24:30.702285       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0815 23:24:30.703789       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.400637ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-175414-m04.17ec0a787012cea0" result=null
	W0815 23:25:19.216094       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.67]
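	The last apiserver line, resetting the endpoints of the "kubernetes" master service to [192.168.39.100 192.168.39.67], shows the stopped member's IP being dropped from the control-plane endpoints, which is the expected reaction to stopping a secondary node. The resulting endpoints can be inspected directly, assuming the same kubectl context:
	kubectl --context ha-175414 -n default get endpoints kubernetes -o yaml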
	
	
	==> kube-controller-manager [0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0] <==
	I0815 23:24:30.566429       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-175414-m04" podCIDRs=["10.244.3.0/24"]
	I0815 23:24:30.566492       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:30.566568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:30.831776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:30.917156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:31.478752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:31.495102       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:32.379626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:32.405826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:34.398632       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:34.399126       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-175414-m04"
	I0815 23:24:34.436307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:40.829948       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:48.211783       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-175414-m04"
	I0815 23:24:48.211938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:48.232862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:49.417111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:25:01.073483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:25:49.444841       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-175414-m04"
	I0815 23:25:49.445121       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m02"
	I0815 23:25:49.467896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m02"
	I0815 23:25:49.596675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.716986ms"
	I0815 23:25:49.596825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.556µs"
	I0815 23:25:51.492069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m02"
	I0815 23:25:54.695663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m02"
	
	
	==> kube-proxy [70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:21:26.437594       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:21:26.454428       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E0815 23:21:26.454560       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:21:26.497573       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:21:26.497603       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:21:26.497632       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:21:26.500608       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:21:26.501148       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:21:26.501211       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:21:26.503006       1 config.go:197] "Starting service config controller"
	I0815 23:21:26.503068       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:21:26.503113       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:21:26.503130       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:21:26.507024       1 config.go:326] "Starting node config controller"
	I0815 23:21:26.507056       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:21:26.604288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:21:26.604396       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:21:26.607175       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755] <==
	W0815 23:21:18.828357       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 23:21:18.828571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0815 23:21:21.801578       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:23:51.808213       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kt8v4\": pod busybox-7dff88458-kt8v4 is already assigned to node \"ha-175414-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-kt8v4" node="ha-175414-m02"
	E0815 23:23:51.817460       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f5d9ce8-0a98-4378-bc08-df90c934314a(default/busybox-7dff88458-kt8v4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-kt8v4"
	E0815 23:23:51.817514       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kt8v4\": pod busybox-7dff88458-kt8v4 is already assigned to node \"ha-175414-m02\"" pod="default/busybox-7dff88458-kt8v4"
	I0815 23:23:51.817561       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-kt8v4" node="ha-175414-m02"
	E0815 23:23:51.817338       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ztvms\": pod busybox-7dff88458-ztvms is already assigned to node \"ha-175414\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ztvms" node="ha-175414"
	E0815 23:23:51.818669       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ztvms\": pod busybox-7dff88458-ztvms is already assigned to node \"ha-175414\"" pod="default/busybox-7dff88458-ztvms"
	E0815 23:24:31.002905       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lw2tv\": pod kube-proxy-lw2tv is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lw2tv" node="ha-175414-m04"
	E0815 23:24:31.003009       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6591e4e0-ab34-481c-b826-bd56fa0ef01b(kube-system/kube-proxy-lw2tv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lw2tv"
	E0815 23:24:31.003032       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lw2tv\": pod kube-proxy-lw2tv is already assigned to node \"ha-175414-m04\"" pod="kube-system/kube-proxy-lw2tv"
	I0815 23:24:31.003065       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lw2tv" node="ha-175414-m04"
	E0815 23:24:31.009629       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m6wl5\": pod kindnet-m6wl5 is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m6wl5" node="ha-175414-m04"
	E0815 23:24:31.009730       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod efa64311-983a-46d2-88b4-306fc316f564(kube-system/kindnet-m6wl5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-m6wl5"
	E0815 23:24:31.009767       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m6wl5\": pod kindnet-m6wl5 is already assigned to node \"ha-175414-m04\"" pod="kube-system/kindnet-m6wl5"
	I0815 23:24:31.009797       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m6wl5" node="ha-175414-m04"
	E0815 23:24:31.089615       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w68mv\": pod kube-proxy-w68mv is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w68mv" node="ha-175414-m04"
	E0815 23:24:31.093322       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8dece2a7-e846-45c9-81a2-a5766b3e2a59(kube-system/kube-proxy-w68mv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w68mv"
	E0815 23:24:31.093536       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w68mv\": pod kube-proxy-w68mv is already assigned to node \"ha-175414-m04\"" pod="kube-system/kube-proxy-w68mv"
	I0815 23:24:31.093743       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w68mv" node="ha-175414-m04"
	E0815 23:24:31.092964       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-442dg\": pod kindnet-442dg is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-442dg" node="ha-175414-m04"
	E0815 23:24:31.099497       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a7abeee9-7619-4535-9654-3a395026f469(kube-system/kindnet-442dg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-442dg"
	E0815 23:24:31.099565       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-442dg\": pod kindnet-442dg is already assigned to node \"ha-175414-m04\"" pod="kube-system/kindnet-442dg"
	I0815 23:24:31.099706       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-442dg" node="ha-175414-m04"
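	The scheduler errors here are all of the form "pod ... is already assigned to node", and each one is immediately followed by "Pod has been assigned to node. Abort adding it back to queue.", so these are bind conflicts that resolve themselves rather than scheduling failures. If in doubt, the placements can be verified with a wide listing, assuming kubectl access to the cluster:
	kubectl --context ha-175414 get pods -A -o wide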
	
	
	==> kubelet <==
	Aug 15 23:26:10 ha-175414 kubelet[1322]: E0815 23:26:10.981181    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764370980438225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:26:20 ha-175414 kubelet[1322]: E0815 23:26:20.859334    1322 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:26:20 ha-175414 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:26:20 ha-175414 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:26:20 ha-175414 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:26:20 ha-175414 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:26:20 ha-175414 kubelet[1322]: E0815 23:26:20.983433    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764380982958245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:26:20 ha-175414 kubelet[1322]: E0815 23:26:20.983515    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764380982958245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:26:30 ha-175414 kubelet[1322]: E0815 23:26:30.988969    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764390987703822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:26:30 ha-175414 kubelet[1322]: E0815 23:26:30.989222    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764390987703822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:26:40 ha-175414 kubelet[1322]: E0815 23:26:40.990895    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764400990615975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:26:40 ha-175414 kubelet[1322]: E0815 23:26:40.990938    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764400990615975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:26:50 ha-175414 kubelet[1322]: E0815 23:26:50.993170    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764410992938846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:26:50 ha-175414 kubelet[1322]: E0815 23:26:50.993224    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764410992938846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:00 ha-175414 kubelet[1322]: E0815 23:27:00.995340    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764420994918432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:00 ha-175414 kubelet[1322]: E0815 23:27:00.995694    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764420994918432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:10 ha-175414 kubelet[1322]: E0815 23:27:10.997376    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764430996789267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:10 ha-175414 kubelet[1322]: E0815 23:27:10.997470    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764430996789267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:20 ha-175414 kubelet[1322]: E0815 23:27:20.857802    1322 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:27:20 ha-175414 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:27:20 ha-175414 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:27:20 ha-175414 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:27:20 ha-175414 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:27:20 ha-175414 kubelet[1322]: E0815 23:27:20.999908    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764440999195309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:20 ha-175414 kubelet[1322]: E0815 23:27:20.999949    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764440999195309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
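	The kubelet log repeats two errors on this cri-o node: the iptables canary failing because the guest kernel has no ip6tables nat table, and the eviction manager failing to derive HasDedicatedImageFs from the image filesystem stats for /var/lib/containers/storage/overlay-images. The same image-filesystem stats can be queried through CRI directly, assuming crictl is available in the guest:
	out/minikube-linux-amd64 -p ha-175414 ssh -- sudo crictl imagefsinfo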
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-175414 -n ha-175414
helpers_test.go:261: (dbg) Run:  kubectl --context ha-175414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.83s)
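The two post-mortem commands logged just above can be replayed against the same profile to re-check apiserver status and list any non-Running pods, assuming the ha-175414 profile still exists locally:
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-175414 -n ha-175414
	kubectl --context ha-175414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running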

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (58.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 3 (3.198799508s)

                                                
                                                
-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-175414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:27:33.494611   35454 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:27:33.494738   35454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:33.494748   35454 out.go:358] Setting ErrFile to fd 2...
	I0815 23:27:33.494755   35454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:33.494932   35454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:27:33.495087   35454 out.go:352] Setting JSON to false
	I0815 23:27:33.495113   35454 mustload.go:65] Loading cluster: ha-175414
	I0815 23:27:33.495239   35454 notify.go:220] Checking for updates...
	I0815 23:27:33.495608   35454 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:27:33.495628   35454 status.go:255] checking status of ha-175414 ...
	I0815 23:27:33.496110   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:33.496151   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:33.515588   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46675
	I0815 23:27:33.516069   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:33.516719   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:33.516749   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:33.517081   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:33.517309   35454 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:27:33.519082   35454 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:27:33.519101   35454 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:33.519426   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:33.519456   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:33.534227   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0815 23:27:33.534719   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:33.535286   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:33.535315   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:33.535611   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:33.535788   35454 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:27:33.538694   35454 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:33.539144   35454 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:33.539175   35454 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:33.539366   35454 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:33.539699   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:33.539745   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:33.554944   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I0815 23:27:33.555282   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:33.555723   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:33.555742   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:33.556059   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:33.556226   35454 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:27:33.556419   35454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:33.556445   35454 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:27:33.559241   35454 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:33.559703   35454 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:33.559732   35454 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:33.560223   35454 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:27:33.560448   35454 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:27:33.560584   35454 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:27:33.560792   35454 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:27:33.642084   35454 ssh_runner.go:195] Run: systemctl --version
	I0815 23:27:33.649085   35454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:33.666157   35454 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:33.666187   35454 api_server.go:166] Checking apiserver status ...
	I0815 23:27:33.666223   35454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:33.684311   35454 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0815 23:27:33.699944   35454 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:33.699994   35454 ssh_runner.go:195] Run: ls
	I0815 23:27:33.704926   35454 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:33.709015   35454 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:33.709037   35454 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:27:33.709046   35454 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:33.709061   35454 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:27:33.709338   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:33.709373   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:33.723833   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0815 23:27:33.724263   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:33.724811   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:33.724829   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:33.725111   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:33.725321   35454 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:27:33.727143   35454 status.go:330] ha-175414-m02 host status = "Running" (err=<nil>)
	I0815 23:27:33.727158   35454 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:33.727460   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:33.727514   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:33.742547   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0815 23:27:33.742915   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:33.743514   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:33.743535   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:33.743832   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:33.744007   35454 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:27:33.746948   35454 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:33.747443   35454 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:33.747475   35454 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:33.747623   35454 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:33.747914   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:33.747947   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:33.762358   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I0815 23:27:33.762779   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:33.763236   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:33.763254   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:33.763520   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:33.763692   35454 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:27:33.763929   35454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:33.763951   35454 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:27:33.766519   35454 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:33.767024   35454 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:33.767051   35454 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:33.767176   35454 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:27:33.767333   35454 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:27:33.767486   35454 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:27:33.767678   35454 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	W0815 23:27:36.298194   35454 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:27:36.298284   35454 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E0815 23:27:36.298301   35454 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:36.298310   35454 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 23:27:36.298335   35454 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:36.298342   35454 status.go:255] checking status of ha-175414-m03 ...
	I0815 23:27:36.298627   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:36.298662   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:36.313457   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
	I0815 23:27:36.313978   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:36.314603   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:36.314628   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:36.314934   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:36.315130   35454 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:27:36.316764   35454 status.go:330] ha-175414-m03 host status = "Running" (err=<nil>)
	I0815 23:27:36.316782   35454 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:36.317166   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:36.317214   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:36.331750   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0815 23:27:36.332155   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:36.332569   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:36.332593   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:36.332976   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:36.333164   35454 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:27:36.336176   35454 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:36.336607   35454 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:36.336639   35454 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:36.336843   35454 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:36.337152   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:36.337191   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:36.351998   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0815 23:27:36.352372   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:36.352850   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:36.352877   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:36.353257   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:36.353450   35454 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:27:36.353664   35454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:36.353686   35454 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:27:36.356694   35454 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:36.357122   35454 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:36.357144   35454 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:36.357307   35454 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:27:36.357466   35454 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:27:36.357584   35454 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:27:36.357697   35454 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:27:36.438531   35454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:36.453103   35454 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:36.453127   35454 api_server.go:166] Checking apiserver status ...
	I0815 23:27:36.453159   35454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:36.474738   35454 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0815 23:27:36.486055   35454 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:36.486105   35454 ssh_runner.go:195] Run: ls
	I0815 23:27:36.490734   35454 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:36.495162   35454 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:36.495187   35454 status.go:422] ha-175414-m03 apiserver status = Running (err=<nil>)
	I0815 23:27:36.495196   35454 status.go:257] ha-175414-m03 status: &{Name:ha-175414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:36.495221   35454 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:27:36.495504   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:36.495541   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:36.510396   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34241
	I0815 23:27:36.510859   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:36.511349   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:36.511370   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:36.511664   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:36.511867   35454 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:27:36.513374   35454 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:27:36.513388   35454 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:36.513686   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:36.513721   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:36.528608   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35165
	I0815 23:27:36.529043   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:36.529453   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:36.529473   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:36.529774   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:36.529968   35454 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:27:36.532551   35454 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:36.533039   35454 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:36.533071   35454 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:36.533202   35454 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:36.533538   35454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:36.533574   35454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:36.548108   35454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0815 23:27:36.548516   35454 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:36.549049   35454 main.go:141] libmachine: Using API Version  1
	I0815 23:27:36.549074   35454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:36.549351   35454 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:36.549572   35454 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:27:36.549736   35454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:36.549752   35454 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:27:36.552763   35454 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:36.553187   35454 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:36.553213   35454 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:36.553356   35454 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:27:36.553520   35454 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:27:36.553675   35454 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:27:36.553858   35454 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:27:36.637389   35454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:36.652998   35454 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
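The exit status 3 comes from ha-175414-m02 still being unreachable over SSH ("no route to host" on 192.168.39.19:22) immediately after "node start"; the other members already report Running, and the test simply re-runs status below. A local equivalent of that retry, assuming the same binary and profile (the polling loop itself is illustrative, not part of the test):
	until out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr; do
	  sleep 5
	done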
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
E0815 23:27:37.660325   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 3 (5.294987774s)

                                                
                                                
-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-175414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:27:37.532119   35554 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:27:37.532248   35554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:37.532256   35554 out.go:358] Setting ErrFile to fd 2...
	I0815 23:27:37.532260   35554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:37.532460   35554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:27:37.532617   35554 out.go:352] Setting JSON to false
	I0815 23:27:37.532641   35554 mustload.go:65] Loading cluster: ha-175414
	I0815 23:27:37.532760   35554 notify.go:220] Checking for updates...
	I0815 23:27:37.532981   35554 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:27:37.532995   35554 status.go:255] checking status of ha-175414 ...
	I0815 23:27:37.533340   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:37.533401   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:37.548361   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0815 23:27:37.548947   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:37.549488   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:37.549524   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:37.549832   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:37.550025   35554 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:27:37.551682   35554 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:27:37.551709   35554 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:37.551968   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:37.551998   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:37.567083   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40001
	I0815 23:27:37.567499   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:37.568097   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:37.568130   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:37.568481   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:37.568656   35554 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:27:37.571707   35554 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:37.572193   35554 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:37.572215   35554 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:37.572467   35554 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:37.572789   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:37.572827   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:37.588479   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0815 23:27:37.588825   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:37.589275   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:37.589296   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:37.589631   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:37.589884   35554 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:27:37.590081   35554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:37.590112   35554 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:27:37.592906   35554 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:37.593327   35554 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:37.593356   35554 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:37.593499   35554 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:27:37.593668   35554 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:27:37.593828   35554 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:27:37.593991   35554 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:27:37.681025   35554 ssh_runner.go:195] Run: systemctl --version
	I0815 23:27:37.688247   35554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:37.705316   35554 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:37.705345   35554 api_server.go:166] Checking apiserver status ...
	I0815 23:27:37.705376   35554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:37.720757   35554 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0815 23:27:37.731604   35554 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:37.731668   35554 ssh_runner.go:195] Run: ls
	I0815 23:27:37.736082   35554 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:37.742001   35554 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:37.742025   35554 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:27:37.742037   35554 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:37.742061   35554 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:27:37.742340   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:37.742380   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:37.759174   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33413
	I0815 23:27:37.759647   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:37.760114   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:37.760138   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:37.760407   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:37.760596   35554 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:27:37.761989   35554 status.go:330] ha-175414-m02 host status = "Running" (err=<nil>)
	I0815 23:27:37.762007   35554 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:37.762292   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:37.762330   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:37.776567   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0815 23:27:37.777001   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:37.777427   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:37.777449   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:37.777683   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:37.777871   35554 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:27:37.780936   35554 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:37.781398   35554 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:37.781421   35554 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:37.781577   35554 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:37.781940   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:37.781979   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:37.797277   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I0815 23:27:37.797651   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:37.798116   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:37.798137   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:37.798400   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:37.798587   35554 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:27:37.798778   35554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:37.798799   35554 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:27:37.801359   35554 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:37.801721   35554 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:37.801755   35554 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:37.801865   35554 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:27:37.802032   35554 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:27:37.802157   35554 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:27:37.802287   35554 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	W0815 23:27:39.370197   35554 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:39.370254   35554 retry.go:31] will retry after 359.157131ms: dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:27:42.442118   35554 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:27:42.442220   35554 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E0815 23:27:42.442238   35554 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:42.442245   35554 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 23:27:42.442263   35554 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:42.442274   35554 status.go:255] checking status of ha-175414-m03 ...
	I0815 23:27:42.442555   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:42.442601   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:42.457123   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44529
	I0815 23:27:42.457558   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:42.458012   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:42.458043   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:42.458338   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:42.458536   35554 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:27:42.460063   35554 status.go:330] ha-175414-m03 host status = "Running" (err=<nil>)
	I0815 23:27:42.460077   35554 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:42.460366   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:42.460412   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:42.475359   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37731
	I0815 23:27:42.475762   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:42.476248   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:42.476267   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:42.476574   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:42.476768   35554 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:27:42.479582   35554 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:42.479996   35554 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:42.480021   35554 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:42.480162   35554 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:42.480562   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:42.480606   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:42.495566   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43709
	I0815 23:27:42.495945   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:42.496428   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:42.496475   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:42.496790   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:42.496990   35554 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:27:42.497172   35554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:42.497190   35554 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:27:42.499855   35554 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:42.500279   35554 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:42.500308   35554 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:42.500415   35554 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:27:42.500564   35554 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:27:42.500708   35554 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:27:42.500845   35554 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:27:42.581288   35554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:42.595993   35554 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:42.596018   35554 api_server.go:166] Checking apiserver status ...
	I0815 23:27:42.596050   35554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:42.610044   35554 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0815 23:27:42.619579   35554 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:42.619655   35554 ssh_runner.go:195] Run: ls
	I0815 23:27:42.624435   35554 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:42.629163   35554 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:42.629189   35554 status.go:422] ha-175414-m03 apiserver status = Running (err=<nil>)
	I0815 23:27:42.629197   35554 status.go:257] ha-175414-m03 status: &{Name:ha-175414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:42.629211   35554 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:27:42.629490   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:42.629528   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:42.644427   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0815 23:27:42.644801   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:42.645249   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:42.645269   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:42.645652   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:42.645838   35554 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:27:42.647449   35554 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:27:42.647466   35554 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:42.647762   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:42.647797   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:42.662567   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0815 23:27:42.662960   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:42.663432   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:42.663453   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:42.663774   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:42.663945   35554 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:27:42.666946   35554 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:42.667352   35554 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:42.667377   35554 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:42.667492   35554 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:42.667794   35554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:42.667835   35554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:42.683589   35554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I0815 23:27:42.683992   35554 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:42.684444   35554 main.go:141] libmachine: Using API Version  1
	I0815 23:27:42.684466   35554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:42.684730   35554 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:42.684910   35554 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:27:42.685097   35554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:42.685123   35554 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:27:42.687970   35554 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:42.688366   35554 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:42.688402   35554 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:42.688571   35554 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:27:42.688729   35554 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:27:42.688855   35554 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:27:42.688985   35554 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:27:42.770188   35554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:42.784824   35554 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 3 (4.077941175s)

-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-175414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 23:27:45.032347   35654 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:27:45.032575   35654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:45.032583   35654 out.go:358] Setting ErrFile to fd 2...
	I0815 23:27:45.032587   35654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:45.032755   35654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:27:45.032923   35654 out.go:352] Setting JSON to false
	I0815 23:27:45.032947   35654 mustload.go:65] Loading cluster: ha-175414
	I0815 23:27:45.032984   35654 notify.go:220] Checking for updates...
	I0815 23:27:45.033378   35654 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:27:45.033392   35654 status.go:255] checking status of ha-175414 ...
	I0815 23:27:45.033868   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:45.033927   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:45.049142   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I0815 23:27:45.049561   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:45.050219   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:45.050250   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:45.050584   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:45.050744   35654 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:27:45.052296   35654 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:27:45.052312   35654 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:45.052645   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:45.052685   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:45.067965   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0815 23:27:45.068407   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:45.068854   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:45.068883   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:45.069189   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:45.069377   35654 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:27:45.072308   35654 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:45.072763   35654 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:45.072789   35654 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:45.072886   35654 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:45.073244   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:45.073293   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:45.088177   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0815 23:27:45.088546   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:45.088977   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:45.088997   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:45.089388   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:45.089581   35654 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:27:45.089767   35654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:45.089785   35654 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:27:45.092654   35654 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:45.093046   35654 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:45.093066   35654 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:45.093201   35654 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:27:45.093375   35654 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:27:45.093524   35654 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:27:45.093670   35654 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:27:45.177537   35654 ssh_runner.go:195] Run: systemctl --version
	I0815 23:27:45.185365   35654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:45.201457   35654 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:45.201489   35654 api_server.go:166] Checking apiserver status ...
	I0815 23:27:45.201520   35654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:45.215963   35654 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0815 23:27:45.226504   35654 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:45.226568   35654 ssh_runner.go:195] Run: ls
	I0815 23:27:45.230841   35654 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:45.235202   35654 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:45.235223   35654 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:27:45.235232   35654 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:45.235253   35654 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:27:45.235540   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:45.235570   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:45.251343   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35291
	I0815 23:27:45.251822   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:45.252273   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:45.252291   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:45.252613   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:45.252756   35654 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:27:45.254323   35654 status.go:330] ha-175414-m02 host status = "Running" (err=<nil>)
	I0815 23:27:45.254341   35654 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:45.254612   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:45.254642   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:45.269884   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33885
	I0815 23:27:45.270254   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:45.270689   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:45.270708   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:45.271008   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:45.271220   35654 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:27:45.274369   35654 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:45.274854   35654 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:45.274880   35654 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:45.275028   35654 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:45.275438   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:45.275474   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:45.291859   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I0815 23:27:45.292328   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:45.292759   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:45.292780   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:45.293136   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:45.293336   35654 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:27:45.293520   35654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:45.293538   35654 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:27:45.296478   35654 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:45.296865   35654 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:45.296889   35654 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:45.297055   35654 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:27:45.297226   35654 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:27:45.297403   35654 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:27:45.297551   35654 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	W0815 23:27:45.514042   35654 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:45.514086   35654 retry.go:31] will retry after 141.958764ms: dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:27:48.718126   35654 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:27:48.718222   35654 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E0815 23:27:48.718247   35654 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:48.718260   35654 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 23:27:48.718289   35654 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:48.718301   35654 status.go:255] checking status of ha-175414-m03 ...
	I0815 23:27:48.718610   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:48.718661   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:48.734311   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
	I0815 23:27:48.734805   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:48.735279   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:48.735298   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:48.735561   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:48.735769   35654 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:27:48.737233   35654 status.go:330] ha-175414-m03 host status = "Running" (err=<nil>)
	I0815 23:27:48.737249   35654 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:48.737525   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:48.737555   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:48.752900   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
	I0815 23:27:48.753301   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:48.753695   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:48.753712   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:48.754054   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:48.754233   35654 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:27:48.757072   35654 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:48.757470   35654 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:48.757503   35654 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:48.757623   35654 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:48.757978   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:48.758023   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:48.772465   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I0815 23:27:48.772882   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:48.773375   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:48.773401   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:48.773823   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:48.774013   35654 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:27:48.774222   35654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:48.774241   35654 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:27:48.776915   35654 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:48.777302   35654 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:48.777330   35654 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:48.777463   35654 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:27:48.777623   35654 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:27:48.777776   35654 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:27:48.777924   35654 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:27:48.858023   35654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:48.872566   35654 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:48.872590   35654 api_server.go:166] Checking apiserver status ...
	I0815 23:27:48.872630   35654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:48.886699   35654 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0815 23:27:48.896477   35654 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:48.896535   35654 ssh_runner.go:195] Run: ls
	I0815 23:27:48.900805   35654 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:48.906685   35654 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:48.906708   35654 status.go:422] ha-175414-m03 apiserver status = Running (err=<nil>)
	I0815 23:27:48.906715   35654 status.go:257] ha-175414-m03 status: &{Name:ha-175414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:48.906729   35654 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:27:48.907005   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:48.907036   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:48.922624   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36685
	I0815 23:27:48.922991   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:48.923421   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:48.923461   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:48.923805   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:48.924011   35654 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:27:48.925590   35654 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:27:48.925604   35654 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:48.925929   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:48.925963   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:48.940413   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0815 23:27:48.940831   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:48.941340   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:48.941365   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:48.941628   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:48.941808   35654 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:27:48.944306   35654 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:48.944690   35654 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:48.944713   35654 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:48.944846   35654 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:48.945122   35654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:48.945158   35654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:48.959725   35654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37889
	I0815 23:27:48.960134   35654 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:48.960548   35654 main.go:141] libmachine: Using API Version  1
	I0815 23:27:48.960569   35654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:48.960929   35654 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:48.961080   35654 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:27:48.961262   35654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:48.961281   35654 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:27:48.964303   35654 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:48.964708   35654 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:48.964730   35654 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:48.964905   35654 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:27:48.965059   35654 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:27:48.965166   35654 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:27:48.965254   35654 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:27:49.049596   35654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:49.063902   35654 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
E0815 23:27:51.159520   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 3 (4.752355817s)

-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-175414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 23:27:50.496691   35754 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:27:50.496797   35754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:50.496805   35754 out.go:358] Setting ErrFile to fd 2...
	I0815 23:27:50.496810   35754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:50.496981   35754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:27:50.497136   35754 out.go:352] Setting JSON to false
	I0815 23:27:50.497160   35754 mustload.go:65] Loading cluster: ha-175414
	I0815 23:27:50.497257   35754 notify.go:220] Checking for updates...
	I0815 23:27:50.497532   35754 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:27:50.497544   35754 status.go:255] checking status of ha-175414 ...
	I0815 23:27:50.497981   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:50.498035   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:50.518090   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0815 23:27:50.518607   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:50.519226   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:50.519249   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:50.519558   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:50.519734   35754 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:27:50.521391   35754 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:27:50.521409   35754 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:50.521685   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:50.521721   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:50.537229   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0815 23:27:50.537649   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:50.538125   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:50.538148   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:50.538467   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:50.538634   35754 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:27:50.541566   35754 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:50.541968   35754 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:50.541992   35754 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:50.542120   35754 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:50.542401   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:50.542433   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:50.557287   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43621
	I0815 23:27:50.557754   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:50.558296   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:50.558313   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:50.558688   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:50.558891   35754 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:27:50.559112   35754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:50.559139   35754 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:27:50.562364   35754 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:50.562902   35754 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:50.562924   35754 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:50.563157   35754 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:27:50.563364   35754 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:27:50.563557   35754 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:27:50.563770   35754 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:27:50.646602   35754 ssh_runner.go:195] Run: systemctl --version
	I0815 23:27:50.654895   35754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:50.675069   35754 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:50.675101   35754 api_server.go:166] Checking apiserver status ...
	I0815 23:27:50.675152   35754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:50.691853   35754 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0815 23:27:50.702780   35754 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:50.702844   35754 ssh_runner.go:195] Run: ls
	I0815 23:27:50.707975   35754 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:50.712173   35754 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:50.712205   35754 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:27:50.712218   35754 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:50.712239   35754 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:27:50.712549   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:50.712584   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:50.727624   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I0815 23:27:50.727993   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:50.728470   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:50.728488   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:50.728805   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:50.729045   35754 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:27:50.730527   35754 status.go:330] ha-175414-m02 host status = "Running" (err=<nil>)
	I0815 23:27:50.730543   35754 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:50.730943   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:50.730981   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:50.746097   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41331
	I0815 23:27:50.746541   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:50.747083   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:50.747117   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:50.747442   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:50.747643   35754 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:27:50.750336   35754 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:50.750768   35754 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:50.750795   35754 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:50.750915   35754 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:50.751201   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:50.751233   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:50.766231   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0815 23:27:50.766696   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:50.767129   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:50.767146   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:50.767464   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:50.767633   35754 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:27:50.767974   35754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:50.767994   35754 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:27:50.770850   35754 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:50.771277   35754 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:50.771305   35754 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:50.771444   35754 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:27:50.771617   35754 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:27:50.771764   35754 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:27:50.771889   35754 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	W0815 23:27:51.786174   35754 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:51.786217   35754 retry.go:31] will retry after 287.83226ms: dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:27:54.858182   35754 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:27:54.858266   35754 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E0815 23:27:54.858289   35754 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:54.858303   35754 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 23:27:54.858340   35754 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:27:54.858353   35754 status.go:255] checking status of ha-175414-m03 ...
	I0815 23:27:54.858695   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:54.858761   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:54.874298   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46805
	I0815 23:27:54.874754   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:54.875297   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:54.875337   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:54.875656   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:54.875858   35754 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:27:54.877298   35754 status.go:330] ha-175414-m03 host status = "Running" (err=<nil>)
	I0815 23:27:54.877313   35754 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:54.877602   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:54.877642   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:54.892736   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0815 23:27:54.893170   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:54.893667   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:54.893687   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:54.894029   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:54.894204   35754 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:27:54.896823   35754 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:54.897216   35754 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:54.897237   35754 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:54.897367   35754 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:27:54.897656   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:54.897690   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:54.913077   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0815 23:27:54.913587   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:54.914070   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:54.914091   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:54.914382   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:54.914520   35754 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:27:54.914707   35754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:54.914733   35754 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:27:54.917338   35754 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:54.917767   35754 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:27:54.917799   35754 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:27:54.917986   35754 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:27:54.918174   35754 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:27:54.918333   35754 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:27:54.918462   35754 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:27:54.997770   35754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:55.016407   35754 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:55.016434   35754 api_server.go:166] Checking apiserver status ...
	I0815 23:27:55.016476   35754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:55.031373   35754 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0815 23:27:55.041397   35754 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:55.041458   35754 ssh_runner.go:195] Run: ls
	I0815 23:27:55.045889   35754 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:55.050200   35754 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:55.050228   35754 status.go:422] ha-175414-m03 apiserver status = Running (err=<nil>)
	I0815 23:27:55.050239   35754 status.go:257] ha-175414-m03 status: &{Name:ha-175414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:55.050265   35754 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:27:55.050656   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:55.050695   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:55.065705   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40909
	I0815 23:27:55.066118   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:55.066544   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:55.066568   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:55.066882   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:55.067070   35754 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:27:55.068593   35754 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:27:55.068610   35754 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:55.068933   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:55.068969   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:55.083774   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0815 23:27:55.084248   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:55.084730   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:55.084751   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:55.085039   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:55.085184   35754 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:27:55.087941   35754 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:55.088350   35754 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:55.088387   35754 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:55.088548   35754 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:27:55.088836   35754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:55.088879   35754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:55.105590   35754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I0815 23:27:55.106041   35754 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:55.106522   35754 main.go:141] libmachine: Using API Version  1
	I0815 23:27:55.106544   35754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:55.106905   35754 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:55.107113   35754 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:27:55.107338   35754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:55.107362   35754 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:27:55.110426   35754 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:55.110931   35754 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:27:55.110959   35754 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:27:55.111163   35754 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:27:55.111352   35754 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:27:55.111548   35754 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:27:55.111725   35754 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:27:55.194081   35754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:55.208279   35754 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
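
The repeated "dial tcp 192.168.39.19:22: connect: no route to host" failures in the stderr above come from the status check dialing the SSH port of ha-175414-m02 before it can run "df -h /var" on that node. Below is a minimal, stand-alone sketch of an equivalent TCP reachability probe; it is illustrative only, not part of the minikube test suite, and the target address and the 5-second timeout are assumptions taken from the DHCP lease recorded in the log.

	// reachable.go — a hypothetical probe, not minikube code: dials the SSH
	// port the way the status check does before attempting "df -h /var".
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// 192.168.39.19 is the lease recorded for ha-175414-m02 in the log;
		// pass a different "host:port" as the first argument to probe another node.
		addr := "192.168.39.19:22"
		if len(os.Args) > 1 {
			addr = os.Args[1]
		}

		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// A stopped or unreachable guest typically surfaces here as
			// "connect: no route to host", matching the stderr output above.
			fmt.Fprintf(os.Stderr, "ssh port unreachable: %v\n", err)
			os.Exit(3)
		}
		defer conn.Close()
		fmt.Printf("ssh port reachable: %s\n", conn.RemoteAddr())
	}

Against a guest that is stopped or has lost its lease, this probe fails the same way the log does, which is consistent with the Host:Error / exit status 3 result the status command reports for ha-175414-m02 above.
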
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 3 (3.734878211s)

                                                
                                                
-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-175414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:27:57.915092   35870 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:27:57.915329   35870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:57.915338   35870 out.go:358] Setting ErrFile to fd 2...
	I0815 23:27:57.915342   35870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:27:57.915515   35870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:27:57.915681   35870 out.go:352] Setting JSON to false
	I0815 23:27:57.915714   35870 mustload.go:65] Loading cluster: ha-175414
	I0815 23:27:57.915871   35870 notify.go:220] Checking for updates...
	I0815 23:27:57.916060   35870 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:27:57.916074   35870 status.go:255] checking status of ha-175414 ...
	I0815 23:27:57.916458   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:57.916507   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:57.935862   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I0815 23:27:57.936290   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:57.936938   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:27:57.936981   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:57.937304   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:57.937485   35870 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:27:57.939346   35870 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:27:57.939362   35870 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:57.939678   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:57.939717   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:57.954661   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42793
	I0815 23:27:57.955128   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:57.955539   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:27:57.955556   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:57.955934   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:57.956137   35870 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:27:57.958808   35870 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:57.959226   35870 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:57.959250   35870 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:57.959386   35870 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:27:57.959665   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:57.959706   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:57.975084   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I0815 23:27:57.975524   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:57.975993   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:27:57.976016   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:57.976372   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:57.976630   35870 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:27:57.976858   35870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:57.976882   35870 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:27:57.979591   35870 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:57.980100   35870 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:27:57.980120   35870 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:27:57.980306   35870 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:27:57.980487   35870 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:27:57.980637   35870 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:27:57.980767   35870 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:27:58.062518   35870 ssh_runner.go:195] Run: systemctl --version
	I0815 23:27:58.068761   35870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:27:58.084120   35870 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:27:58.084152   35870 api_server.go:166] Checking apiserver status ...
	I0815 23:27:58.084189   35870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:27:58.100137   35870 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0815 23:27:58.112377   35870 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:27:58.112427   35870 ssh_runner.go:195] Run: ls
	I0815 23:27:58.117028   35870 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:27:58.121321   35870 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:27:58.121342   35870 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:27:58.121354   35870 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:27:58.121393   35870 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:27:58.121681   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:58.121729   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:58.136468   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0815 23:27:58.136884   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:58.137422   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:27:58.137447   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:58.137781   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:58.138040   35870 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:27:58.139361   35870 status.go:330] ha-175414-m02 host status = "Running" (err=<nil>)
	I0815 23:27:58.139378   35870 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:58.139759   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:58.139813   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:58.154957   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I0815 23:27:58.155314   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:58.155778   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:27:58.155813   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:58.156131   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:58.156276   35870 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:27:58.159242   35870 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:58.159737   35870 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:58.159763   35870 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:58.159886   35870 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:27:58.160345   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:27:58.160414   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:27:58.174958   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0815 23:27:58.175344   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:27:58.175782   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:27:58.175803   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:27:58.176149   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:27:58.176325   35870 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:27:58.176505   35870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:27:58.176532   35870 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:27:58.179025   35870 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:58.179414   35870 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:27:58.179451   35870 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:27:58.179749   35870 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:27:58.179945   35870 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:27:58.180110   35870 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:27:58.180251   35870 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	W0815 23:28:01.258100   35870 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:28:01.258218   35870 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E0815 23:28:01.258237   35870 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:28:01.258250   35870 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 23:28:01.258267   35870 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:28:01.258274   35870 status.go:255] checking status of ha-175414-m03 ...
	I0815 23:28:01.258565   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:01.258606   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:01.273385   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I0815 23:28:01.273828   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:01.274325   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:28:01.274346   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:01.274656   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:01.274840   35870 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:28:01.276387   35870 status.go:330] ha-175414-m03 host status = "Running" (err=<nil>)
	I0815 23:28:01.276404   35870 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:28:01.276682   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:01.276731   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:01.291883   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I0815 23:28:01.292317   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:01.292720   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:28:01.292744   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:01.293085   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:01.293254   35870 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:28:01.296271   35870 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:01.296712   35870 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:28:01.296731   35870 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:01.296893   35870 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:28:01.297181   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:01.297221   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:01.312299   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0815 23:28:01.312672   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:01.313180   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:28:01.313199   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:01.313481   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:01.313649   35870 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:28:01.313821   35870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:01.313858   35870 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:28:01.316745   35870 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:01.317071   35870 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:28:01.317103   35870 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:01.317257   35870 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:28:01.317412   35870 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:28:01.317513   35870 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:28:01.317639   35870 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:28:01.398224   35870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:01.414025   35870 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:28:01.414057   35870 api_server.go:166] Checking apiserver status ...
	I0815 23:28:01.414098   35870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:28:01.428473   35870 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0815 23:28:01.441139   35870 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:28:01.441213   35870 ssh_runner.go:195] Run: ls
	I0815 23:28:01.446037   35870 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:28:01.450317   35870 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:28:01.450340   35870 status.go:422] ha-175414-m03 apiserver status = Running (err=<nil>)
	I0815 23:28:01.450352   35870 status.go:257] ha-175414-m03 status: &{Name:ha-175414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:28:01.450369   35870 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:28:01.450655   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:01.450693   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:01.465371   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I0815 23:28:01.465722   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:01.466184   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:28:01.466206   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:01.466576   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:01.466794   35870 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:28:01.468479   35870 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:28:01.468492   35870 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:28:01.468765   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:01.468805   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:01.483645   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41805
	I0815 23:28:01.484124   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:01.484582   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:28:01.484609   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:01.484927   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:01.485111   35870 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:28:01.488024   35870 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:01.488439   35870 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:28:01.488462   35870 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:01.488595   35870 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:28:01.488983   35870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:01.489024   35870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:01.503558   35870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I0815 23:28:01.503979   35870 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:01.504407   35870 main.go:141] libmachine: Using API Version  1
	I0815 23:28:01.504431   35870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:01.504704   35870 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:01.504883   35870 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:28:01.505057   35870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:01.505090   35870 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:28:01.507756   35870 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:01.508254   35870 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:28:01.508288   35870 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:01.508368   35870 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:28:01.508530   35870 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:28:01.508685   35870 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:28:01.508836   35870 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:28:01.593219   35870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:01.606957   35870 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 3 (3.722173081s)

                                                
                                                
-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-175414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:28:07.523887   35987 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:28:07.524151   35987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:28:07.524162   35987 out.go:358] Setting ErrFile to fd 2...
	I0815 23:28:07.524167   35987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:28:07.524362   35987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:28:07.524540   35987 out.go:352] Setting JSON to false
	I0815 23:28:07.524569   35987 mustload.go:65] Loading cluster: ha-175414
	I0815 23:28:07.524688   35987 notify.go:220] Checking for updates...
	I0815 23:28:07.524995   35987 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:28:07.525011   35987 status.go:255] checking status of ha-175414 ...
	I0815 23:28:07.525439   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:07.525500   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:07.543989   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0815 23:28:07.544426   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:07.545010   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:07.545047   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:07.545383   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:07.545621   35987 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:28:07.547312   35987 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:28:07.547327   35987 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:28:07.547730   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:07.547772   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:07.563623   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0815 23:28:07.564042   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:07.564525   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:07.564550   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:07.564932   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:07.565107   35987 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:28:07.567638   35987 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:07.568070   35987 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:28:07.568094   35987 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:07.568232   35987 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:28:07.568507   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:07.568550   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:07.583802   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I0815 23:28:07.584266   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:07.584791   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:07.584817   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:07.585145   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:07.585306   35987 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:28:07.585482   35987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:07.585518   35987 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:28:07.588653   35987 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:07.589122   35987 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:28:07.589156   35987 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:07.589374   35987 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:28:07.589587   35987 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:28:07.589885   35987 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:28:07.590140   35987 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:28:07.680004   35987 ssh_runner.go:195] Run: systemctl --version
	I0815 23:28:07.688330   35987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:07.709681   35987 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:28:07.709716   35987 api_server.go:166] Checking apiserver status ...
	I0815 23:28:07.709755   35987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:28:07.724480   35987 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0815 23:28:07.734714   35987 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:28:07.734774   35987 ssh_runner.go:195] Run: ls
	I0815 23:28:07.739457   35987 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:28:07.743756   35987 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:28:07.743779   35987 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:28:07.743797   35987 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:28:07.743819   35987 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:28:07.744147   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:07.744186   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:07.759761   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0815 23:28:07.760200   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:07.760676   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:07.760701   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:07.760962   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:07.761123   35987 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:28:07.762466   35987 status.go:330] ha-175414-m02 host status = "Running" (err=<nil>)
	I0815 23:28:07.762483   35987 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:28:07.762755   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:07.762805   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:07.777270   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0815 23:28:07.777642   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:07.778138   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:07.778160   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:07.778454   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:07.778635   35987 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:28:07.781793   35987 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:28:07.782303   35987 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:28:07.782345   35987 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:28:07.782451   35987 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:28:07.782865   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:07.782912   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:07.798312   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0815 23:28:07.798744   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:07.799204   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:07.799217   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:07.799562   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:07.799771   35987 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:28:07.799995   35987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:07.800018   35987 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:28:07.802953   35987 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:28:07.803376   35987 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:28:07.803402   35987 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:28:07.803579   35987 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:28:07.803769   35987 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:28:07.803928   35987 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:28:07.804038   35987 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	W0815 23:28:10.858080   35987 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.19:22: connect: no route to host
	W0815 23:28:10.858201   35987 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E0815 23:28:10.858223   35987 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:28:10.858234   35987 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 23:28:10.858257   35987 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	I0815 23:28:10.858269   35987 status.go:255] checking status of ha-175414-m03 ...
	I0815 23:28:10.858588   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:10.858633   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:10.873143   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0815 23:28:10.873504   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:10.873957   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:10.873980   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:10.874305   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:10.874483   35987 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:28:10.876141   35987 status.go:330] ha-175414-m03 host status = "Running" (err=<nil>)
	I0815 23:28:10.876155   35987 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:28:10.876463   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:10.876499   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:10.891107   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38187
	I0815 23:28:10.891532   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:10.891935   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:10.891955   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:10.892246   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:10.892417   35987 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:28:10.895416   35987 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:10.895899   35987 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:28:10.895917   35987 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:10.896047   35987 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:28:10.896444   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:10.896487   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:10.911562   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0815 23:28:10.911950   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:10.912394   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:10.912412   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:10.912715   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:10.912902   35987 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:28:10.913082   35987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:10.913098   35987 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:28:10.915901   35987 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:10.916284   35987 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:28:10.916303   35987 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:10.916477   35987 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:28:10.916630   35987 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:28:10.916769   35987 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:28:10.916908   35987 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:28:10.999055   35987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:11.016917   35987 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:28:11.016949   35987 api_server.go:166] Checking apiserver status ...
	I0815 23:28:11.016988   35987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:28:11.031426   35987 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0815 23:28:11.041021   35987 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:28:11.041094   35987 ssh_runner.go:195] Run: ls
	I0815 23:28:11.045628   35987 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:28:11.050046   35987 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:28:11.050067   35987 status.go:422] ha-175414-m03 apiserver status = Running (err=<nil>)
	I0815 23:28:11.050077   35987 status.go:257] ha-175414-m03 status: &{Name:ha-175414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:28:11.050095   35987 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:28:11.050495   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:11.050531   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:11.064942   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44789
	I0815 23:28:11.065378   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:11.065866   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:11.065890   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:11.066182   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:11.066345   35987 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:28:11.067884   35987 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:28:11.067900   35987 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:28:11.068184   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:11.068216   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:11.082812   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36583
	I0815 23:28:11.083208   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:11.083663   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:11.083682   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:11.083979   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:11.084197   35987 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:28:11.087115   35987 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:11.087507   35987 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:28:11.087535   35987 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:11.087674   35987 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:28:11.087977   35987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:11.088023   35987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:11.102453   35987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36659
	I0815 23:28:11.102855   35987 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:11.103304   35987 main.go:141] libmachine: Using API Version  1
	I0815 23:28:11.103326   35987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:11.103655   35987 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:11.103818   35987 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:28:11.104011   35987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:11.104035   35987 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:28:11.107244   35987 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:11.107658   35987 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:28:11.107681   35987 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:11.107853   35987 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:28:11.108047   35987 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:28:11.108201   35987 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:28:11.108337   35987 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:28:11.189303   35987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:11.204353   35987 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
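The status.go:257 lines in the trace above print one record per node, e.g. &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}. For readers following the traces, here is a minimal Go sketch of a struct carrying those fields; the type name and comments are illustrative assumptions, not minikube's own definition.

package main

import "fmt"

// NodeStatus mirrors the fields printed at status.go:257 in the traces above.
// Illustrative sketch only; minikube's real struct may differ.
type NodeStatus struct {
	Name       string // node name, e.g. "ha-175414-m02"
	Host       string // VM state: "Running" or "Stopped"
	Kubelet    string // kubelet unit state as reported by systemctl
	APIServer  string // "Running", "Stopped", or "Irrelevant" on worker nodes
	Kubeconfig string // "Configured", "Stopped", or "Irrelevant" on worker nodes
	Worker     bool   // true for ha-175414-m04 in this cluster
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	m02 := NodeStatus{Name: "ha-175414-m02", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	fmt.Printf("%+v\n", m02)
}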
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 7 (614.856503ms)

                                                
                                                
-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-175414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:28:21.680417   36125 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:28:21.680536   36125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:28:21.680548   36125 out.go:358] Setting ErrFile to fd 2...
	I0815 23:28:21.680555   36125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:28:21.680822   36125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:28:21.681076   36125 out.go:352] Setting JSON to false
	I0815 23:28:21.681111   36125 mustload.go:65] Loading cluster: ha-175414
	I0815 23:28:21.681182   36125 notify.go:220] Checking for updates...
	I0815 23:28:21.681634   36125 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:28:21.681654   36125 status.go:255] checking status of ha-175414 ...
	I0815 23:28:21.682403   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:21.682457   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:21.697284   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0815 23:28:21.697684   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:21.698286   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:21.698318   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:21.698629   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:21.698984   36125 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:28:21.700689   36125 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:28:21.700703   36125 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:28:21.700971   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:21.701001   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:21.716059   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0815 23:28:21.716524   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:21.716995   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:21.717016   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:21.717306   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:21.717491   36125 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:28:21.720172   36125 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:21.720577   36125 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:28:21.720600   36125 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:21.720720   36125 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:28:21.721032   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:21.721072   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:21.737580   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41497
	I0815 23:28:21.737989   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:21.738430   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:21.738449   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:21.738761   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:21.738922   36125 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:28:21.739102   36125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:21.739129   36125 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:28:21.741667   36125 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:21.742134   36125 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:28:21.742164   36125 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:21.742296   36125 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:28:21.742473   36125 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:28:21.742655   36125 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:28:21.742828   36125 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:28:21.826248   36125 ssh_runner.go:195] Run: systemctl --version
	I0815 23:28:21.832233   36125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:21.849451   36125 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:28:21.849481   36125 api_server.go:166] Checking apiserver status ...
	I0815 23:28:21.849518   36125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:28:21.865519   36125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0815 23:28:21.876412   36125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:28:21.876475   36125 ssh_runner.go:195] Run: ls
	I0815 23:28:21.881949   36125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:28:21.887951   36125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:28:21.887974   36125 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:28:21.887983   36125 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:28:21.887997   36125 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:28:21.888321   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:21.888359   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:21.902913   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0815 23:28:21.903434   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:21.903876   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:21.903901   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:21.904188   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:21.904370   36125 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:28:21.906004   36125 status.go:330] ha-175414-m02 host status = "Stopped" (err=<nil>)
	I0815 23:28:21.906017   36125 status.go:343] host is not running, skipping remaining checks
	I0815 23:28:21.906025   36125 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:28:21.906044   36125 status.go:255] checking status of ha-175414-m03 ...
	I0815 23:28:21.906338   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:21.906381   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:21.920965   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0815 23:28:21.921394   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:21.921968   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:21.922002   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:21.922356   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:21.922507   36125 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:28:21.924147   36125 status.go:330] ha-175414-m03 host status = "Running" (err=<nil>)
	I0815 23:28:21.924164   36125 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:28:21.924459   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:21.924488   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:21.938773   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0815 23:28:21.939092   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:21.939555   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:21.939575   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:21.939919   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:21.940127   36125 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:28:21.942840   36125 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:21.943357   36125 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:28:21.943380   36125 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:21.943491   36125 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:28:21.943779   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:21.943820   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:21.959187   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0815 23:28:21.959548   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:21.960047   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:21.960075   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:21.960383   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:21.960569   36125 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:28:21.960878   36125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:21.960903   36125 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:28:21.963842   36125 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:21.964236   36125 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:28:21.964259   36125 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:21.964402   36125 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:28:21.964578   36125 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:28:21.964725   36125 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:28:21.964865   36125 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:28:22.041600   36125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:22.056975   36125 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:28:22.057009   36125 api_server.go:166] Checking apiserver status ...
	I0815 23:28:22.057045   36125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:28:22.076068   36125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0815 23:28:22.088861   36125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:28:22.088923   36125 ssh_runner.go:195] Run: ls
	I0815 23:28:22.093573   36125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:28:22.098192   36125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:28:22.098218   36125 status.go:422] ha-175414-m03 apiserver status = Running (err=<nil>)
	I0815 23:28:22.098228   36125 status.go:257] ha-175414-m03 status: &{Name:ha-175414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:28:22.098247   36125 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:28:22.098529   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:22.098560   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:22.113088   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0815 23:28:22.113590   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:22.114169   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:22.114190   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:22.114502   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:22.114712   36125 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:28:22.116384   36125 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:28:22.116397   36125 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:28:22.116692   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:22.116743   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:22.131267   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
	I0815 23:28:22.131774   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:22.132212   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:22.132230   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:22.132539   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:22.132742   36125 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:28:22.135430   36125 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:22.135846   36125 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:28:22.135873   36125 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:22.136001   36125 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:28:22.136296   36125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:22.136332   36125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:22.150872   36125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39797
	I0815 23:28:22.151295   36125 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:22.151773   36125 main.go:141] libmachine: Using API Version  1
	I0815 23:28:22.151799   36125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:22.152138   36125 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:22.152308   36125 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:28:22.152503   36125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:22.152524   36125 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:28:22.155169   36125 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:22.155570   36125 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:28:22.155595   36125 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:22.155760   36125 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:28:22.155903   36125 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:28:22.156042   36125 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:28:22.156149   36125 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:28:22.237429   36125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:22.252462   36125 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
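As the api_server.go lines show, the health verdict for each running control-plane node comes from a GET against https://192.168.39.254:8443/healthz (the cluster VIP), which returns 200 with body "ok" here. Below is a minimal Go sketch of that probe; the helper name and the InsecureSkipVerify shortcut are assumptions for brevity, since minikube itself trusts the cluster CA from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz is a hypothetical helper mirroring the "Checking apiserver
// healthz at ..." step: GET /healthz and treat HTTP 200 with body "ok" as healthy.
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping TLS verification is an illustrative shortcut; minikube
		// instead verifies against the cluster CA taken from the kubeconfig.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s/healthz returned %d: %s\n", endpoint, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443"); err != nil {
		fmt.Println("apiserver status = Error:", err)
	}
}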
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 7 (626.670929ms)

                                                
                                                
-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-175414-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:28:29.401750   36229 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:28:29.402052   36229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:28:29.402062   36229 out.go:358] Setting ErrFile to fd 2...
	I0815 23:28:29.402068   36229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:28:29.402242   36229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:28:29.402419   36229 out.go:352] Setting JSON to false
	I0815 23:28:29.402449   36229 mustload.go:65] Loading cluster: ha-175414
	I0815 23:28:29.402548   36229 notify.go:220] Checking for updates...
	I0815 23:28:29.402849   36229 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:28:29.402865   36229 status.go:255] checking status of ha-175414 ...
	I0815 23:28:29.403241   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.403300   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.419254   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0815 23:28:29.419661   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.420190   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.420209   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.420585   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.420814   36229 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:28:29.422201   36229 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:28:29.422217   36229 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:28:29.422572   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.422617   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.437775   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0815 23:28:29.438251   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.438693   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.438722   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.439010   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.439202   36229 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:28:29.441983   36229 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:29.442411   36229 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:28:29.442444   36229 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:29.442519   36229 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:28:29.442858   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.442900   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.457295   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41607
	I0815 23:28:29.457680   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.458123   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.458142   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.458419   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.458559   36229 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:28:29.458692   36229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:29.458720   36229 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:28:29.461409   36229 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:29.461871   36229 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:28:29.461896   36229 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:28:29.462063   36229 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:28:29.462232   36229 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:28:29.462377   36229 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:28:29.462493   36229 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:28:29.545989   36229 ssh_runner.go:195] Run: systemctl --version
	I0815 23:28:29.552435   36229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:29.578766   36229 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:28:29.578806   36229 api_server.go:166] Checking apiserver status ...
	I0815 23:28:29.578836   36229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:28:29.594912   36229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup
	W0815 23:28:29.605486   36229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:28:29.605537   36229 ssh_runner.go:195] Run: ls
	I0815 23:28:29.610262   36229 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:28:29.615354   36229 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:28:29.615382   36229 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:28:29.615394   36229 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:28:29.615426   36229 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:28:29.615886   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.615933   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.630671   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I0815 23:28:29.631053   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.631499   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.631522   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.631843   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.632052   36229 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:28:29.633587   36229 status.go:330] ha-175414-m02 host status = "Stopped" (err=<nil>)
	I0815 23:28:29.633598   36229 status.go:343] host is not running, skipping remaining checks
	I0815 23:28:29.633604   36229 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:28:29.633622   36229 status.go:255] checking status of ha-175414-m03 ...
	I0815 23:28:29.633994   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.634032   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.648737   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46597
	I0815 23:28:29.649222   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.649685   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.649706   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.650007   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.650180   36229 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:28:29.651653   36229 status.go:330] ha-175414-m03 host status = "Running" (err=<nil>)
	I0815 23:28:29.651670   36229 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:28:29.651958   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.651987   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.666577   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0815 23:28:29.667008   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.667488   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.667512   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.667833   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.668026   36229 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:28:29.670614   36229 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:29.671036   36229 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:28:29.671059   36229 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:29.671191   36229 host.go:66] Checking if "ha-175414-m03" exists ...
	I0815 23:28:29.671499   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.671535   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.686361   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0815 23:28:29.686758   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.687202   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.687222   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.687544   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.687720   36229 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:28:29.687912   36229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:29.687934   36229 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:28:29.690366   36229 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:29.690733   36229 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:28:29.690760   36229 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:29.690898   36229 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:28:29.691082   36229 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:28:29.691223   36229 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:28:29.691351   36229 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:28:29.770637   36229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:29.788654   36229 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:28:29.788687   36229 api_server.go:166] Checking apiserver status ...
	I0815 23:28:29.788749   36229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:28:29.806372   36229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0815 23:28:29.819112   36229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:28:29.819176   36229 ssh_runner.go:195] Run: ls
	I0815 23:28:29.825023   36229 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:28:29.829438   36229 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:28:29.829463   36229 status.go:422] ha-175414-m03 apiserver status = Running (err=<nil>)
	I0815 23:28:29.829472   36229 status.go:257] ha-175414-m03 status: &{Name:ha-175414-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:28:29.829487   36229 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:28:29.829859   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.829903   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.844622   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0815 23:28:29.845077   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.845533   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.845553   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.845864   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.846072   36229 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:28:29.847651   36229 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:28:29.847664   36229 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:28:29.847965   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.848000   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.863489   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33193
	I0815 23:28:29.863927   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.864429   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.864459   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.864782   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.864959   36229 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:28:29.867608   36229 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:29.867980   36229 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:28:29.868026   36229 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:29.868119   36229 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:28:29.868434   36229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:29.868480   36229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:29.883627   36229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0815 23:28:29.884026   36229 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:29.884437   36229 main.go:141] libmachine: Using API Version  1
	I0815 23:28:29.884458   36229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:29.884774   36229 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:29.884978   36229 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:28:29.885193   36229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:28:29.885218   36229 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:28:29.887907   36229 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:29.888248   36229 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:28:29.888274   36229 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:29.888391   36229 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:28:29.888548   36229 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:28:29.888689   36229 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:28:29.888815   36229 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:28:29.970524   36229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:28:29.985933   36229 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
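The recurring warning "unable to find freezer cgroup: ... Process exited with status 1" is benign: the egrep over /proc/<pid>/cgroup finds no freezer controller entry (as happens when the freezer controller is not exposed there, e.g. on cgroup v2 guests), so the lookup is logged as a warning and the code moves on to the healthz probe; the exit status 7 instead reflects ha-175414-m02 still reporting Stopped. A rough, local-only sketch of that check-and-fall-back sequence follows, with the command strings taken from the trace and the function name invented for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverProcessCheck is a hypothetical sketch of the sequence in the trace:
// find the kube-apiserver PID, attempt the freezer-cgroup lookup, and treat its
// absence as a warning only. minikube runs these same commands over SSH
// (ssh_runner.go) rather than locally as done here.
func apiserverProcessCheck() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "Stopped", nil // pgrep exits non-zero when no process matches
	}
	pid := strings.TrimSpace(string(out))

	// The freezer entry may be missing from /proc/<pid>/cgroup; log and continue.
	if err := exec.Command("sudo", "egrep", "^[0-9]+:freezer:", "/proc/"+pid+"/cgroup").Run(); err != nil {
		fmt.Println("W unable to find freezer cgroup (continuing):", err)
	}

	// The final verdict comes from the /healthz probe sketched earlier.
	return "Running", nil
}

func main() {
	state, _ := apiserverProcessCheck()
	fmt.Println("apiserver status =", state)
}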
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr" : exit status 7
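ha_test.go:428 re-runs out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr several times after "node start m02" (see the Audit table below), and every attempt above still shows m02 as Stopped and exits 7, so line 432 marks the test failed. Below is a hedged sketch of how such a poll-until-healthy loop around the status command can look; the function, timeout, and sleep interval are assumptions, not the test's actual code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForStatus is an illustrative retry loop (not the code in ha_test.go):
// it re-runs "minikube status" until it exits 0 or the deadline passes.
func waitForStatus(binary, profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(binary, "-p", profile, "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil // every node reported healthy
		}
		fmt.Printf("status failed (%v), %d bytes of output, retrying...\n", err, len(out))
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("cluster %q did not report healthy within %s", profile, timeout)
}

func main() {
	if err := waitForStatus("out/minikube-linux-amd64", "ha-175414", 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}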
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-175414 -n ha-175414
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-175414 logs -n 25: (1.419396973s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414:/home/docker/cp-test_ha-175414-m03_ha-175414.txt                      |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414 sudo cat                                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414.txt                                |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m02:/home/docker/cp-test_ha-175414-m03_ha-175414-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m02 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04:/home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m04 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp testdata/cp-test.txt                                               | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile430320474/001/cp-test_ha-175414-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414:/home/docker/cp-test_ha-175414-m04_ha-175414.txt                      |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414 sudo cat                                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414.txt                                |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m02:/home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m02 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03:/home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m03 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-175414 node stop m02 -v=7                                                    | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-175414 node start m02 -v=7                                                   | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:27 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:20:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:20:39.132234   30687 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:20:39.132484   30687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:20:39.132492   30687 out.go:358] Setting ErrFile to fd 2...
	I0815 23:20:39.132496   30687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:20:39.132654   30687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:20:39.133199   30687 out.go:352] Setting JSON to false
	I0815 23:20:39.134115   30687 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3739,"bootTime":1723760300,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:20:39.134173   30687 start.go:139] virtualization: kvm guest
	I0815 23:20:39.136302   30687 out.go:177] * [ha-175414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:20:39.138076   30687 notify.go:220] Checking for updates...
	I0815 23:20:39.138101   30687 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:20:39.139349   30687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:20:39.140547   30687 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:20:39.141831   30687 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:20:39.143082   30687 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:20:39.144296   30687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:20:39.145648   30687 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:20:39.180551   30687 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 23:20:39.181708   30687 start.go:297] selected driver: kvm2
	I0815 23:20:39.181730   30687 start.go:901] validating driver "kvm2" against <nil>
	I0815 23:20:39.181741   30687 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:20:39.182442   30687 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:20:39.182539   30687 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:20:39.197281   30687 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:20:39.197328   30687 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 23:20:39.197558   30687 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:20:39.197627   30687 cni.go:84] Creating CNI manager for ""
	I0815 23:20:39.197642   30687 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0815 23:20:39.197650   30687 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 23:20:39.197711   30687 start.go:340] cluster config:
	{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:20:39.197828   30687 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:20:39.199692   30687 out.go:177] * Starting "ha-175414" primary control-plane node in "ha-175414" cluster
	I0815 23:20:39.201029   30687 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:20:39.201061   30687 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:20:39.201069   30687 cache.go:56] Caching tarball of preloaded images
	I0815 23:20:39.201155   30687 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:20:39.201171   30687 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:20:39.201495   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:20:39.201517   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json: {Name:mk6e3969a695f5334d0a96f3c5a2e62b2ca895a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:20:39.201679   30687 start.go:360] acquireMachinesLock for ha-175414: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:20:39.201714   30687 start.go:364] duration metric: took 19.572µs to acquireMachinesLock for "ha-175414"
	I0815 23:20:39.201736   30687 start.go:93] Provisioning new machine with config: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:20:39.201811   30687 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 23:20:39.203457   30687 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 23:20:39.203585   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:20:39.203629   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:20:39.217904   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0815 23:20:39.218312   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:20:39.218784   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:20:39.218803   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:20:39.219049   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:20:39.219227   30687 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:20:39.219382   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:20:39.219535   30687 start.go:159] libmachine.API.Create for "ha-175414" (driver="kvm2")
	I0815 23:20:39.219562   30687 client.go:168] LocalClient.Create starting
	I0815 23:20:39.219596   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0815 23:20:39.219628   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:20:39.219651   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:20:39.219703   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0815 23:20:39.219719   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:20:39.219737   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:20:39.219754   30687 main.go:141] libmachine: Running pre-create checks...
	I0815 23:20:39.219764   30687 main.go:141] libmachine: (ha-175414) Calling .PreCreateCheck
	I0815 23:20:39.220095   30687 main.go:141] libmachine: (ha-175414) Calling .GetConfigRaw
	I0815 23:20:39.220478   30687 main.go:141] libmachine: Creating machine...
	I0815 23:20:39.220490   30687 main.go:141] libmachine: (ha-175414) Calling .Create
	I0815 23:20:39.220616   30687 main.go:141] libmachine: (ha-175414) Creating KVM machine...
	I0815 23:20:39.221863   30687 main.go:141] libmachine: (ha-175414) DBG | found existing default KVM network
	I0815 23:20:39.222527   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.222381   30710 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0815 23:20:39.222556   30687 main.go:141] libmachine: (ha-175414) DBG | created network xml: 
	I0815 23:20:39.222569   30687 main.go:141] libmachine: (ha-175414) DBG | <network>
	I0815 23:20:39.222577   30687 main.go:141] libmachine: (ha-175414) DBG |   <name>mk-ha-175414</name>
	I0815 23:20:39.222586   30687 main.go:141] libmachine: (ha-175414) DBG |   <dns enable='no'/>
	I0815 23:20:39.222592   30687 main.go:141] libmachine: (ha-175414) DBG |   
	I0815 23:20:39.222602   30687 main.go:141] libmachine: (ha-175414) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 23:20:39.222608   30687 main.go:141] libmachine: (ha-175414) DBG |     <dhcp>
	I0815 23:20:39.222615   30687 main.go:141] libmachine: (ha-175414) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 23:20:39.222623   30687 main.go:141] libmachine: (ha-175414) DBG |     </dhcp>
	I0815 23:20:39.222631   30687 main.go:141] libmachine: (ha-175414) DBG |   </ip>
	I0815 23:20:39.222647   30687 main.go:141] libmachine: (ha-175414) DBG |   
	I0815 23:20:39.222658   30687 main.go:141] libmachine: (ha-175414) DBG | </network>
	I0815 23:20:39.222673   30687 main.go:141] libmachine: (ha-175414) DBG | 
	I0815 23:20:39.227788   30687 main.go:141] libmachine: (ha-175414) DBG | trying to create private KVM network mk-ha-175414 192.168.39.0/24...
	I0815 23:20:39.292857   30687 main.go:141] libmachine: (ha-175414) DBG | private KVM network mk-ha-175414 192.168.39.0/24 created
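For readers unfamiliar with the libvirt XML the driver logs above, the same network definition can be rendered from a small Go template. The field names and values below are taken from the log and are illustrative only, not minikube's actual code:

// Illustrative sketch: renders a libvirt network definition like the one
// logged above. Values are copied from the log; this is not the kvm2 driver.
package main

import (
	"os"
	"text/template"
)

const networkTmpl = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type netParams struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

func main() {
	p := netParams{
		Name:      "ha-175414",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	// Write the rendered XML to stdout; a file like this is what gets defined as the private network.
	tmpl := template.Must(template.New("net").Parse(networkTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}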
	I0815 23:20:39.292884   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.292810   30710 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:20:39.292983   30687 main.go:141] libmachine: (ha-175414) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414 ...
	I0815 23:20:39.293022   30687 main.go:141] libmachine: (ha-175414) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 23:20:39.293049   30687 main.go:141] libmachine: (ha-175414) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 23:20:39.534225   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.534136   30710 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa...
	I0815 23:20:39.626298   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.626192   30710 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/ha-175414.rawdisk...
	I0815 23:20:39.626322   30687 main.go:141] libmachine: (ha-175414) DBG | Writing magic tar header
	I0815 23:20:39.626332   30687 main.go:141] libmachine: (ha-175414) DBG | Writing SSH key tar header
	I0815 23:20:39.626339   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:39.626322   30710 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414 ...
	I0815 23:20:39.626440   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414
	I0815 23:20:39.626477   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414 (perms=drwx------)
	I0815 23:20:39.626485   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0815 23:20:39.626495   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:20:39.626500   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0815 23:20:39.626510   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 23:20:39.626516   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home/jenkins
	I0815 23:20:39.626522   30687 main.go:141] libmachine: (ha-175414) DBG | Checking permissions on dir: /home
	I0815 23:20:39.626528   30687 main.go:141] libmachine: (ha-175414) DBG | Skipping /home - not owner
	I0815 23:20:39.626541   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0815 23:20:39.626552   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0815 23:20:39.626573   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0815 23:20:39.626581   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 23:20:39.626590   30687 main.go:141] libmachine: (ha-175414) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 23:20:39.626595   30687 main.go:141] libmachine: (ha-175414) Creating domain...
	I0815 23:20:39.627405   30687 main.go:141] libmachine: (ha-175414) define libvirt domain using xml: 
	I0815 23:20:39.627427   30687 main.go:141] libmachine: (ha-175414) <domain type='kvm'>
	I0815 23:20:39.627435   30687 main.go:141] libmachine: (ha-175414)   <name>ha-175414</name>
	I0815 23:20:39.627439   30687 main.go:141] libmachine: (ha-175414)   <memory unit='MiB'>2200</memory>
	I0815 23:20:39.627444   30687 main.go:141] libmachine: (ha-175414)   <vcpu>2</vcpu>
	I0815 23:20:39.627450   30687 main.go:141] libmachine: (ha-175414)   <features>
	I0815 23:20:39.627455   30687 main.go:141] libmachine: (ha-175414)     <acpi/>
	I0815 23:20:39.627459   30687 main.go:141] libmachine: (ha-175414)     <apic/>
	I0815 23:20:39.627473   30687 main.go:141] libmachine: (ha-175414)     <pae/>
	I0815 23:20:39.627481   30687 main.go:141] libmachine: (ha-175414)     
	I0815 23:20:39.627493   30687 main.go:141] libmachine: (ha-175414)   </features>
	I0815 23:20:39.627501   30687 main.go:141] libmachine: (ha-175414)   <cpu mode='host-passthrough'>
	I0815 23:20:39.627510   30687 main.go:141] libmachine: (ha-175414)   
	I0815 23:20:39.627517   30687 main.go:141] libmachine: (ha-175414)   </cpu>
	I0815 23:20:39.627523   30687 main.go:141] libmachine: (ha-175414)   <os>
	I0815 23:20:39.627534   30687 main.go:141] libmachine: (ha-175414)     <type>hvm</type>
	I0815 23:20:39.627542   30687 main.go:141] libmachine: (ha-175414)     <boot dev='cdrom'/>
	I0815 23:20:39.627546   30687 main.go:141] libmachine: (ha-175414)     <boot dev='hd'/>
	I0815 23:20:39.627551   30687 main.go:141] libmachine: (ha-175414)     <bootmenu enable='no'/>
	I0815 23:20:39.627558   30687 main.go:141] libmachine: (ha-175414)   </os>
	I0815 23:20:39.627563   30687 main.go:141] libmachine: (ha-175414)   <devices>
	I0815 23:20:39.627567   30687 main.go:141] libmachine: (ha-175414)     <disk type='file' device='cdrom'>
	I0815 23:20:39.627579   30687 main.go:141] libmachine: (ha-175414)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/boot2docker.iso'/>
	I0815 23:20:39.627592   30687 main.go:141] libmachine: (ha-175414)       <target dev='hdc' bus='scsi'/>
	I0815 23:20:39.627601   30687 main.go:141] libmachine: (ha-175414)       <readonly/>
	I0815 23:20:39.627607   30687 main.go:141] libmachine: (ha-175414)     </disk>
	I0815 23:20:39.627618   30687 main.go:141] libmachine: (ha-175414)     <disk type='file' device='disk'>
	I0815 23:20:39.627625   30687 main.go:141] libmachine: (ha-175414)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 23:20:39.627632   30687 main.go:141] libmachine: (ha-175414)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/ha-175414.rawdisk'/>
	I0815 23:20:39.627643   30687 main.go:141] libmachine: (ha-175414)       <target dev='hda' bus='virtio'/>
	I0815 23:20:39.627666   30687 main.go:141] libmachine: (ha-175414)     </disk>
	I0815 23:20:39.627689   30687 main.go:141] libmachine: (ha-175414)     <interface type='network'>
	I0815 23:20:39.627700   30687 main.go:141] libmachine: (ha-175414)       <source network='mk-ha-175414'/>
	I0815 23:20:39.627711   30687 main.go:141] libmachine: (ha-175414)       <model type='virtio'/>
	I0815 23:20:39.627720   30687 main.go:141] libmachine: (ha-175414)     </interface>
	I0815 23:20:39.627731   30687 main.go:141] libmachine: (ha-175414)     <interface type='network'>
	I0815 23:20:39.627744   30687 main.go:141] libmachine: (ha-175414)       <source network='default'/>
	I0815 23:20:39.627754   30687 main.go:141] libmachine: (ha-175414)       <model type='virtio'/>
	I0815 23:20:39.627780   30687 main.go:141] libmachine: (ha-175414)     </interface>
	I0815 23:20:39.627801   30687 main.go:141] libmachine: (ha-175414)     <serial type='pty'>
	I0815 23:20:39.627814   30687 main.go:141] libmachine: (ha-175414)       <target port='0'/>
	I0815 23:20:39.627828   30687 main.go:141] libmachine: (ha-175414)     </serial>
	I0815 23:20:39.627841   30687 main.go:141] libmachine: (ha-175414)     <console type='pty'>
	I0815 23:20:39.627854   30687 main.go:141] libmachine: (ha-175414)       <target type='serial' port='0'/>
	I0815 23:20:39.627867   30687 main.go:141] libmachine: (ha-175414)     </console>
	I0815 23:20:39.627878   30687 main.go:141] libmachine: (ha-175414)     <rng model='virtio'>
	I0815 23:20:39.627904   30687 main.go:141] libmachine: (ha-175414)       <backend model='random'>/dev/random</backend>
	I0815 23:20:39.627919   30687 main.go:141] libmachine: (ha-175414)     </rng>
	I0815 23:20:39.627930   30687 main.go:141] libmachine: (ha-175414)     
	I0815 23:20:39.627940   30687 main.go:141] libmachine: (ha-175414)     
	I0815 23:20:39.627949   30687 main.go:141] libmachine: (ha-175414)   </devices>
	I0815 23:20:39.627955   30687 main.go:141] libmachine: (ha-175414) </domain>
	I0815 23:20:39.627965   30687 main.go:141] libmachine: (ha-175414) 
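The domain XML logged above is what libvirt boots the node from. As a rough sketch only (assuming the standard libvirt Go bindings at libvirt.org/go/libvirt and a hypothetical ha-175414.xml file holding that XML), defining and starting such a domain looks like:

// Hedged sketch, not minikube's implementation: define a domain from XML and start it.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// ha-175414.xml is a hypothetical file containing the domain XML shown in the log.
	domainXML, err := os.ReadFile("ha-175414.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// Create() boots the defined domain, after which the driver waits for an IP.
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}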
	I0815 23:20:39.632318   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:03:37:1f in network default
	I0815 23:20:39.632914   30687 main.go:141] libmachine: (ha-175414) Ensuring networks are active...
	I0815 23:20:39.632944   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:39.633550   30687 main.go:141] libmachine: (ha-175414) Ensuring network default is active
	I0815 23:20:39.633879   30687 main.go:141] libmachine: (ha-175414) Ensuring network mk-ha-175414 is active
	I0815 23:20:39.634408   30687 main.go:141] libmachine: (ha-175414) Getting domain xml...
	I0815 23:20:39.635048   30687 main.go:141] libmachine: (ha-175414) Creating domain...
	I0815 23:20:40.841021   30687 main.go:141] libmachine: (ha-175414) Waiting to get IP...
	I0815 23:20:40.841732   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:40.842079   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:40.842100   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:40.842058   30710 retry.go:31] will retry after 195.088814ms: waiting for machine to come up
	I0815 23:20:41.038377   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:41.038675   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:41.038698   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:41.038627   30710 retry.go:31] will retry after 350.43297ms: waiting for machine to come up
	I0815 23:20:41.391114   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:41.391547   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:41.391574   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:41.391504   30710 retry.go:31] will retry after 346.192999ms: waiting for machine to come up
	I0815 23:20:41.738883   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:41.739310   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:41.739339   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:41.739259   30710 retry.go:31] will retry after 395.632919ms: waiting for machine to come up
	I0815 23:20:42.136722   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:42.137183   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:42.137211   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:42.137145   30710 retry.go:31] will retry after 640.154019ms: waiting for machine to come up
	I0815 23:20:42.779013   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:42.779527   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:42.779568   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:42.779489   30710 retry.go:31] will retry after 897.025784ms: waiting for machine to come up
	I0815 23:20:43.678800   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:43.679312   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:43.679358   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:43.679271   30710 retry.go:31] will retry after 1.071070056s: waiting for machine to come up
	I0815 23:20:44.752300   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:44.752783   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:44.752814   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:44.752732   30710 retry.go:31] will retry after 1.252527242s: waiting for machine to come up
	I0815 23:20:46.006923   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:46.007343   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:46.007369   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:46.007297   30710 retry.go:31] will retry after 1.860999961s: waiting for machine to come up
	I0815 23:20:47.870262   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:47.870687   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:47.870723   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:47.870649   30710 retry.go:31] will retry after 1.673749324s: waiting for machine to come up
	I0815 23:20:49.546472   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:49.546888   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:49.546915   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:49.546856   30710 retry.go:31] will retry after 1.873147128s: waiting for machine to come up
	I0815 23:20:51.423020   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:51.423549   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:51.423577   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:51.423500   30710 retry.go:31] will retry after 3.056668989s: waiting for machine to come up
	I0815 23:20:54.481416   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:54.481960   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:54.481982   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:54.481891   30710 retry.go:31] will retry after 4.021901294s: waiting for machine to come up
	I0815 23:20:58.507975   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:20:58.508455   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find current IP address of domain ha-175414 in network mk-ha-175414
	I0815 23:20:58.508502   30687 main.go:141] libmachine: (ha-175414) DBG | I0815 23:20:58.508428   30710 retry.go:31] will retry after 3.780383701s: waiting for machine to come up
	I0815 23:21:02.292116   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.292616   30687 main.go:141] libmachine: (ha-175414) Found IP for machine: 192.168.39.67
	I0815 23:21:02.292668   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has current primary IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
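The repeated "will retry after ...: waiting for machine to come up" lines above are a jittered backoff loop polling for the guest's DHCP lease. A minimal Go sketch of that pattern, where lookupIP and the timings are illustrative assumptions rather than minikube's retry package:

// Minimal sketch of a wait-for-IP loop with jittered, roughly exponential backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the DHCP leases of the libvirt network.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet") // placeholder
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add jitter so concurrent waiters do not poll in lockstep, then grow the delay.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}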
	I0815 23:21:02.292682   30687 main.go:141] libmachine: (ha-175414) Reserving static IP address...
	I0815 23:21:02.293043   30687 main.go:141] libmachine: (ha-175414) DBG | unable to find host DHCP lease matching {name: "ha-175414", mac: "52:54:00:f0:98:13", ip: "192.168.39.67"} in network mk-ha-175414
	I0815 23:21:02.363118   30687 main.go:141] libmachine: (ha-175414) Reserved static IP address: 192.168.39.67
	I0815 23:21:02.363144   30687 main.go:141] libmachine: (ha-175414) Waiting for SSH to be available...
	I0815 23:21:02.363160   30687 main.go:141] libmachine: (ha-175414) DBG | Getting to WaitForSSH function...
	I0815 23:21:02.365565   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.366680   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.366803   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.367398   30687 main.go:141] libmachine: (ha-175414) DBG | Using SSH client type: external
	I0815 23:21:02.367417   30687 main.go:141] libmachine: (ha-175414) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa (-rw-------)
	I0815 23:21:02.367461   30687 main.go:141] libmachine: (ha-175414) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:21:02.367486   30687 main.go:141] libmachine: (ha-175414) DBG | About to run SSH command:
	I0815 23:21:02.367520   30687 main.go:141] libmachine: (ha-175414) DBG | exit 0
	I0815 23:21:02.494052   30687 main.go:141] libmachine: (ha-175414) DBG | SSH cmd err, output: <nil>: 
	I0815 23:21:02.494294   30687 main.go:141] libmachine: (ha-175414) KVM machine creation complete!
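The "exit 0" probe above verifies SSH reachability by shelling out to the external ssh client with the options shown in the log. A small sketch of that check; the address and key path are placeholders, and the option list is abridged from the log:

// Sketch only: a zero exit status from `ssh ... exit 0` means sshd accepted the key.
package main

import (
	"fmt"
	"os/exec"
)

func sshReachable(addr, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	fmt.Println(sshReachable("192.168.39.67", "/path/to/id_rsa"))
}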
	I0815 23:21:02.494680   30687 main.go:141] libmachine: (ha-175414) Calling .GetConfigRaw
	I0815 23:21:02.495185   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:02.495410   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:02.495586   30687 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 23:21:02.495599   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:02.496803   30687 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 23:21:02.496816   30687 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 23:21:02.496822   30687 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 23:21:02.496827   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.498916   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.499207   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.499244   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.499311   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:02.499491   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.499626   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.499772   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:02.499899   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:02.500112   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:02.500126   30687 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 23:21:02.605241   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:21:02.605269   30687 main.go:141] libmachine: Detecting the provisioner...
	I0815 23:21:02.605279   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.608064   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.608413   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.608440   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.608558   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:02.608751   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.608949   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.609112   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:02.609282   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:02.609441   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:02.609452   30687 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 23:21:02.718593   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 23:21:02.718654   30687 main.go:141] libmachine: found compatible host: buildroot
	I0815 23:21:02.718664   30687 main.go:141] libmachine: Provisioning with buildroot...
	I0815 23:21:02.718676   30687 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:21:02.718967   30687 buildroot.go:166] provisioning hostname "ha-175414"
	I0815 23:21:02.719001   30687 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:21:02.719221   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.721638   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.722011   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.722037   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.722188   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:02.722351   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.722490   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.722637   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:02.722812   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:02.722962   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:02.722973   30687 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-175414 && echo "ha-175414" | sudo tee /etc/hostname
	I0815 23:21:02.844709   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414
	
	I0815 23:21:02.844754   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.847473   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.847800   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.847829   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.847980   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:02.848180   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.848321   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:02.848427   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:02.848548   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:02.848729   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:02.848751   30687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-175414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-175414/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-175414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:21:02.967096   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:21:02.967125   30687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:21:02.967185   30687 buildroot.go:174] setting up certificates
	I0815 23:21:02.967200   30687 provision.go:84] configureAuth start
	I0815 23:21:02.967220   30687 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:21:02.967510   30687 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:21:02.969990   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.970366   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.970388   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.970606   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:02.972920   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.973267   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:02.973295   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:02.973369   30687 provision.go:143] copyHostCerts
	I0815 23:21:02.973400   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:21:02.973485   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:21:02.973509   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:21:02.973575   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:21:02.973663   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:21:02.973682   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:21:02.973686   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:21:02.973735   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:21:02.973792   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:21:02.973814   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:21:02.973820   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:21:02.973864   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:21:02.973942   30687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.ha-175414 san=[127.0.0.1 192.168.39.67 ha-175414 localhost minikube]
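The server certificate generated above carries both IP and DNS SANs (127.0.0.1, the VM IP, the hostname, localhost, minikube) so clients can reach the machine under any of those names. A hedged sketch of producing such a CA-signed certificate with Go's crypto/x509; the key type, validity period, and output handling are assumptions, not minikube's exact provisioning code:

// Illustrative sketch: self-signed CA plus a server certificate with IP and DNS SANs.
// Error handling is elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA certificate.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log, signed by the CA.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-175414"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
		DNSNames:     []string{"ha-175414", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// PEM-encode the server certificate.
	f, _ := os.Create("server.pem")
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}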
	I0815 23:21:03.246553   30687 provision.go:177] copyRemoteCerts
	I0815 23:21:03.246613   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:21:03.246633   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.249195   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.249489   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.249518   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.249716   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.249960   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.250101   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.250212   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:03.332109   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:21:03.332191   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:21:03.357349   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:21:03.357427   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0815 23:21:03.382778   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:21:03.382852   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 23:21:03.407683   30687 provision.go:87] duration metric: took 440.469279ms to configureAuth
	I0815 23:21:03.407710   30687 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:21:03.407922   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:03.407991   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.410375   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.410696   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.410723   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.410927   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.411105   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.411264   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.411374   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.411505   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:03.411661   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:03.411676   30687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:21:03.684024   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:21:03.684057   30687 main.go:141] libmachine: Checking connection to Docker...
	I0815 23:21:03.684068   30687 main.go:141] libmachine: (ha-175414) Calling .GetURL
	I0815 23:21:03.685193   30687 main.go:141] libmachine: (ha-175414) DBG | Using libvirt version 6000000
	I0815 23:21:03.687439   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.687738   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.687759   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.687927   30687 main.go:141] libmachine: Docker is up and running!
	I0815 23:21:03.687942   30687 main.go:141] libmachine: Reticulating splines...
	I0815 23:21:03.687948   30687 client.go:171] duration metric: took 24.468376965s to LocalClient.Create
	I0815 23:21:03.687969   30687 start.go:167] duration metric: took 24.468433657s to libmachine.API.Create "ha-175414"
	I0815 23:21:03.687981   30687 start.go:293] postStartSetup for "ha-175414" (driver="kvm2")
	I0815 23:21:03.687995   30687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:21:03.688010   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.688257   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:21:03.688281   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.690410   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.690752   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.690780   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.690961   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.691120   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.691250   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.691380   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:03.778909   30687 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:21:03.783412   30687 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:21:03.783441   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:21:03.783510   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:21:03.783601   30687 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:21:03.783613   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:21:03.783733   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:21:03.794334   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:21:03.819021   30687 start.go:296] duration metric: took 131.025603ms for postStartSetup
	I0815 23:21:03.819066   30687 main.go:141] libmachine: (ha-175414) Calling .GetConfigRaw
	I0815 23:21:03.819613   30687 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:21:03.822089   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.822354   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.822373   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.822601   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:21:03.822776   30687 start.go:128] duration metric: took 24.620953921s to createHost
	I0815 23:21:03.822794   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.825003   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.825359   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.825390   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.825454   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.825626   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.826109   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.826269   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.826442   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:03.826614   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:21:03.826628   30687 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:21:03.934709   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764063.912105974
	
	I0815 23:21:03.934737   30687 fix.go:216] guest clock: 1723764063.912105974
	I0815 23:21:03.934745   30687 fix.go:229] Guest: 2024-08-15 23:21:03.912105974 +0000 UTC Remote: 2024-08-15 23:21:03.822784949 +0000 UTC m=+24.724050572 (delta=89.321025ms)
	I0815 23:21:03.934763   30687 fix.go:200] guest clock delta is within tolerance: 89.321025ms
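The delta reported above is simply the guest wall-clock minus the host-side timestamp taken when the SSH command returned; the arithmetic can be checked by hand from the two values in the log:

    # guest:  1723764063.912105974  (date +%s.%N run over SSH)
    # remote: 1723764063.822784949  (host clock when the command completed)
    echo "1723764063.912105974 - 1723764063.822784949" | bc -l   # 0.089321025 s ≈ 89.321025 ms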
	I0815 23:21:03.934768   30687 start.go:83] releasing machines lock for "ha-175414", held for 24.733043179s
	I0815 23:21:03.934785   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.935067   30687 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:21:03.937686   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.938050   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.938080   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.938226   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.938727   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.938908   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:03.938986   30687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:21:03.939029   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.939125   30687 ssh_runner.go:195] Run: cat /version.json
	I0815 23:21:03.939144   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:03.941471   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.941727   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.941805   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.941830   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.941937   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.942039   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:03.942060   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:03.942106   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.942302   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:03.942312   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.942490   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:03.942509   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:03.942657   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:03.942815   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:04.023345   30687 ssh_runner.go:195] Run: systemctl --version
	I0815 23:21:04.044213   30687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:21:04.211504   30687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:21:04.217481   30687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:21:04.217560   30687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:21:04.235510   30687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 23:21:04.235537   30687 start.go:495] detecting cgroup driver to use...
	I0815 23:21:04.235603   30687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:21:04.252899   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:21:04.267198   30687 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:21:04.267246   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:21:04.281265   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:21:04.295754   30687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:21:04.415851   30687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:21:04.572466   30687 docker.go:233] disabling docker service ...
	I0815 23:21:04.572529   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:21:04.586435   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:21:04.599790   30687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:21:04.721646   30687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:21:04.842009   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:21:04.856666   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:21:04.875455   30687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:21:04.875524   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.885652   30687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:21:04.885719   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.895820   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.906250   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.916710   30687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:21:04.927500   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.938716   30687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:04.956186   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
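Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, the cgroup manager and the unprivileged-port sysctl; a rough way to confirm the result on the guest (expected values copied from the commands in the log, the rest of the file is left untouched):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",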
	I0815 23:21:04.966841   30687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:21:04.976627   30687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 23:21:04.976691   30687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 23:21:04.989636   30687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
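The netfilter failure above is expected on first boot because br_netfilter is not loaded yet; the two follow-up commands take care of it, and the same kernel prerequisites can be re-checked by hand (a sketch of how things should read once the commands succeed):

    sudo modprobe br_netfilter                    # makes /proc/sys/net/bridge/* appear
    sysctl net.bridge.bridge-nf-call-iptables     # should now resolve instead of "cannot stat"
    cat /proc/sys/net/ipv4/ip_forward             # 1 after the echo above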
	I0815 23:21:04.999689   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:21:05.114749   30687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 23:21:05.252784   30687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:21:05.252856   30687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:21:05.258037   30687 start.go:563] Will wait 60s for crictl version
	I0815 23:21:05.258101   30687 ssh_runner.go:195] Run: which crictl
	I0815 23:21:05.262019   30687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:21:05.310161   30687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:21:05.310242   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:21:05.338380   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:21:05.368390   30687 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:21:05.369453   30687 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:21:05.371970   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:05.372254   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:05.372280   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:05.372457   30687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:21:05.376620   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:21:05.390218   30687 kubeadm.go:883] updating cluster {Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 23:21:05.390313   30687 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:21:05.390363   30687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:21:05.426809   30687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 23:21:05.426888   30687 ssh_runner.go:195] Run: which lz4
	I0815 23:21:05.430910   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0815 23:21:05.431000   30687 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 23:21:05.435499   30687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 23:21:05.435524   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 23:21:06.815675   30687 crio.go:462] duration metric: took 1.384702615s to copy over tarball
	I0815 23:21:06.815754   30687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 23:21:08.869910   30687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.054131365s)
	I0815 23:21:08.869942   30687 crio.go:469] duration metric: took 2.054241253s to extract the tarball
	I0815 23:21:08.869949   30687 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 23:21:08.907690   30687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:21:08.952823   30687 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:21:08.952841   30687 cache_images.go:84] Images are preloaded, skipping loading
	I0815 23:21:08.952848   30687 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.0 crio true true} ...
	I0815 23:21:08.952994   30687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-175414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:21:08.953085   30687 ssh_runner.go:195] Run: crio config
	I0815 23:21:09.002052   30687 cni.go:84] Creating CNI manager for ""
	I0815 23:21:09.002073   30687 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 23:21:09.002083   30687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 23:21:09.002110   30687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-175414 NodeName:ha-175414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 23:21:09.002284   30687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-175414"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
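Note that the generated config above still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 flags as deprecated further down in this log; if one wanted to silence that warning, the migration kubeadm itself suggests could be run by hand on the guest (a sketch; the input path is the file minikube writes later in the log, the output filename is hypothetical):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml   # hypothetical destination file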
	
	I0815 23:21:09.002310   30687 kube-vip.go:115] generating kube-vip config ...
	I0815 23:21:09.002358   30687 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 23:21:09.019183   30687 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 23:21:09.019296   30687 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
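The static pod manifest above binds the HA virtual IP 192.168.39.254 to eth0 and load-balances the API server on port 8443; once the pod is running, a quick manual check on the guest would look like this (a sketch, with the address, interface and port taken from the config above):

    ip addr show eth0 | grep 192.168.39.254        # VIP attached by kube-vip
    curl -sk https://192.168.39.254:8443/version   # API answers via the VIP (needs the apiserver up)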
	I0815 23:21:09.019360   30687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:21:09.029784   30687 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 23:21:09.029863   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 23:21:09.039534   30687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0815 23:21:09.056501   30687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:21:09.073482   30687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0815 23:21:09.089735   30687 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0815 23:21:09.106335   30687 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 23:21:09.110310   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:21:09.122925   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:21:09.246127   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
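At this point the kubelet unit, its 10-kubeadm.conf drop-in and the kube-vip manifest have all been copied over and the kubelet started; a hand check that systemd actually picked up the drop-in (a sketch, not part of the test run):

    systemctl cat kubelet | grep -A1 '^ExecStart='   # should show the /var/lib/minikube/binaries/v1.31.0/kubelet invocation
    systemctl is-active kubelet                      # "active" once the start above succeeds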
	I0815 23:21:09.263803   30687 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414 for IP: 192.168.39.67
	I0815 23:21:09.263822   30687 certs.go:194] generating shared ca certs ...
	I0815 23:21:09.263836   30687 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.264001   30687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:21:09.264074   30687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:21:09.264087   30687 certs.go:256] generating profile certs ...
	I0815 23:21:09.264187   30687 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key
	I0815 23:21:09.264214   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt with IP's: []
	I0815 23:21:09.320117   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt ...
	I0815 23:21:09.320142   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt: {Name:mkd1d68ac3a3761648f6241a5bda961db1b0339d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.320308   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key ...
	I0815 23:21:09.320319   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key: {Name:mkbb5a5c392511e6cda86c3a57e5cb385c0dab88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.320400   30687 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.20c82d28
	I0815 23:21:09.320428   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.20c82d28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.254]
	I0815 23:21:09.683881   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.20c82d28 ...
	I0815 23:21:09.683908   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.20c82d28: {Name:mkdedd169d9ef2899ccb567dcfb81c1c89a42da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.684062   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.20c82d28 ...
	I0815 23:21:09.684074   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.20c82d28: {Name:mkee298d112daeb0367b95864f61c25cb9dd721d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.684151   30687 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.20c82d28 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt
	I0815 23:21:09.684217   30687 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.20c82d28 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key
	I0815 23:21:09.684268   30687 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key
	I0815 23:21:09.684281   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt with IP's: []
	I0815 23:21:09.860951   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt ...
	I0815 23:21:09.860983   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt: {Name:mkc8b77b93ca3212f3e604b092660415423e7e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.861154   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key ...
	I0815 23:21:09.861166   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key: {Name:mkd60f00950a94e9b4a75caa9bd3e4a6d1de8348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:09.861235   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:21:09.861251   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:21:09.861262   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:21:09.861275   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:21:09.861286   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:21:09.861300   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:21:09.861313   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:21:09.861325   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:21:09.861371   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:21:09.861408   30687 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:21:09.861418   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:21:09.861477   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:21:09.861505   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:21:09.861526   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:21:09.861563   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:21:09.861589   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:21:09.861604   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:09.861616   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:21:09.862152   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:21:09.888860   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:21:09.914161   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:21:09.939456   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:21:09.965239   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 23:21:09.990555   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 23:21:10.022396   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:21:10.050838   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:21:10.086066   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:21:10.111709   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:21:10.137236   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:21:10.162745   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 23:21:10.188830   30687 ssh_runner.go:195] Run: openssl version
	I0815 23:21:10.195631   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:21:10.207281   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:21:10.212435   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:21:10.212494   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:21:10.219492   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 23:21:10.231179   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:21:10.242962   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:10.247439   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:10.247508   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:10.253224   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 23:21:10.264885   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:21:10.276294   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:21:10.280825   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:21:10.280890   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:21:10.287542   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
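The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above are OpenSSL subject-hash values computed on the line just before each link; one of them can be reproduced by hand like this (filenames from the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"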
	I0815 23:21:10.300908   30687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:21:10.305355   30687 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 23:21:10.305404   30687 kubeadm.go:392] StartCluster: {Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:21:10.305470   30687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 23:21:10.305507   30687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 23:21:10.343742   30687 cri.go:89] found id: ""
	I0815 23:21:10.343809   30687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 23:21:10.356051   30687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 23:21:10.366669   30687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 23:21:10.377294   30687 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 23:21:10.377315   30687 kubeadm.go:157] found existing configuration files:
	
	I0815 23:21:10.377358   30687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 23:21:10.387368   30687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 23:21:10.387429   30687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 23:21:10.397415   30687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 23:21:10.407268   30687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 23:21:10.407329   30687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 23:21:10.417956   30687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 23:21:10.427875   30687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 23:21:10.427934   30687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 23:21:10.438137   30687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 23:21:10.448102   30687 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 23:21:10.448151   30687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 23:21:10.458412   30687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 23:21:10.575205   30687 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 23:21:10.575285   30687 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 23:21:10.704641   30687 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 23:21:10.704922   30687 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 23:21:10.705075   30687 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 23:21:10.717110   30687 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 23:21:10.777741   30687 out.go:235]   - Generating certificates and keys ...
	I0815 23:21:10.777907   30687 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 23:21:10.777973   30687 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 23:21:10.809009   30687 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 23:21:11.174417   30687 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 23:21:11.336144   30687 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 23:21:11.502745   30687 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 23:21:11.621432   30687 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 23:21:11.621744   30687 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-175414 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0815 23:21:11.840088   30687 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 23:21:11.840306   30687 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-175414 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0815 23:21:11.982660   30687 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 23:21:12.157923   30687 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 23:21:12.264631   30687 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 23:21:12.264872   30687 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 23:21:12.400847   30687 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 23:21:12.624721   30687 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 23:21:12.804857   30687 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 23:21:13.035081   30687 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 23:21:13.117127   30687 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 23:21:13.117749   30687 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 23:21:13.123359   30687 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 23:21:13.126175   30687 out.go:235]   - Booting up control plane ...
	I0815 23:21:13.126279   30687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 23:21:13.126349   30687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 23:21:13.126408   30687 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 23:21:13.142935   30687 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 23:21:13.149543   30687 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 23:21:13.149609   30687 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 23:21:13.281858   30687 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 23:21:13.282000   30687 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 23:21:13.784551   30687 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.916048ms
	I0815 23:21:13.784696   30687 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 23:21:19.785185   30687 kubeadm.go:310] [api-check] The API server is healthy after 6.003512006s
	I0815 23:21:19.805524   30687 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 23:21:19.819540   30687 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 23:21:20.354210   30687 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 23:21:20.354401   30687 kubeadm.go:310] [mark-control-plane] Marking the node ha-175414 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 23:21:20.372454   30687 kubeadm.go:310] [bootstrap-token] Using token: dntkld.gr81o1hgvvlllskg
	I0815 23:21:20.373930   30687 out.go:235]   - Configuring RBAC rules ...
	I0815 23:21:20.374037   30687 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 23:21:20.385231   30687 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 23:21:20.407460   30687 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 23:21:20.411925   30687 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 23:21:20.418358   30687 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 23:21:20.423618   30687 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 23:21:20.443218   30687 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 23:21:20.783008   30687 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 23:21:21.193144   30687 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 23:21:21.194166   30687 kubeadm.go:310] 
	I0815 23:21:21.194244   30687 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 23:21:21.194254   30687 kubeadm.go:310] 
	I0815 23:21:21.194349   30687 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 23:21:21.194374   30687 kubeadm.go:310] 
	I0815 23:21:21.194421   30687 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 23:21:21.194482   30687 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 23:21:21.194528   30687 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 23:21:21.194543   30687 kubeadm.go:310] 
	I0815 23:21:21.194627   30687 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 23:21:21.194637   30687 kubeadm.go:310] 
	I0815 23:21:21.194700   30687 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 23:21:21.194709   30687 kubeadm.go:310] 
	I0815 23:21:21.194781   30687 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 23:21:21.194878   30687 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 23:21:21.194948   30687 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 23:21:21.194955   30687 kubeadm.go:310] 
	I0815 23:21:21.195025   30687 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 23:21:21.195103   30687 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 23:21:21.195114   30687 kubeadm.go:310] 
	I0815 23:21:21.195208   30687 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dntkld.gr81o1hgvvlllskg \
	I0815 23:21:21.195343   30687 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0815 23:21:21.195377   30687 kubeadm.go:310] 	--control-plane 
	I0815 23:21:21.195386   30687 kubeadm.go:310] 
	I0815 23:21:21.195499   30687 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 23:21:21.195514   30687 kubeadm.go:310] 
	I0815 23:21:21.195626   30687 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dntkld.gr81o1hgvvlllskg \
	I0815 23:21:21.195764   30687 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
	I0815 23:21:21.196881   30687 kubeadm.go:310] W0815 23:21:10.556546     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 23:21:21.197276   30687 kubeadm.go:310] W0815 23:21:10.557539     852 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 23:21:21.197416   30687 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 23:21:21.197446   30687 cni.go:84] Creating CNI manager for ""
	I0815 23:21:21.197458   30687 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 23:21:21.200088   30687 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 23:21:21.201397   30687 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 23:21:21.206932   30687 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 23:21:21.206953   30687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 23:21:21.232482   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
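After the apply above, the kindnet CNI objects exist but the DaemonSet pods still have to schedule; a quick way to watch for them using the same binary and kubeconfig the runner uses (a sketch; the exact pod names are whatever the DaemonSet generates):

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get pods -n kube-system -o wide   # kindnet-* and the control-plane pods should turn Running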
	I0815 23:21:21.647423   30687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 23:21:21.647526   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:21.647530   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-175414 minikube.k8s.io/updated_at=2024_08_15T23_21_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=ha-175414 minikube.k8s.io/primary=true
	I0815 23:21:21.672169   30687 ops.go:34] apiserver oom_adj: -16
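The -16 read back above is the legacy /proc oom_adj view of the apiserver's OOM score adjustment (the kernel maps the modern -1000..1000 oom_score_adj range onto -17..15, so a critical static pod's -997 surfaces as -16); the modern knob can be read directly (a sketch, and the -997 expectation is an assumption about the apiserver's criticality, not taken from this run):

    cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj   # typically -997 for the kube-apiserver static pod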
	I0815 23:21:21.815053   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:22.315198   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:22.815789   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:23.315041   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:23.816127   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:24.315181   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:24.816078   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:25.315642   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 23:21:25.424471   30687 kubeadm.go:1113] duration metric: took 3.777007269s to wait for elevateKubeSystemPrivileges
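The elevateKubeSystemPrivileges step above binds cluster-admin to the kube-system:default service account and then re-runs `kubectl get sa default` until the default ServiceAccount is available (about 3.8s here). A rough shell equivalent of the wait, for illustration only (not a command taken from the log):

    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done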
	I0815 23:21:25.424504   30687 kubeadm.go:394] duration metric: took 15.11910366s to StartCluster
	I0815 23:21:25.424526   30687 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:25.424595   30687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:21:25.425176   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:25.425384   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 23:21:25.425386   30687 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:21:25.425404   30687 start.go:241] waiting for startup goroutines ...
	I0815 23:21:25.425417   30687 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 23:21:25.425507   30687 addons.go:69] Setting storage-provisioner=true in profile "ha-175414"
	I0815 23:21:25.425514   30687 addons.go:69] Setting default-storageclass=true in profile "ha-175414"
	I0815 23:21:25.425547   30687 addons.go:234] Setting addon storage-provisioner=true in "ha-175414"
	I0815 23:21:25.425545   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:25.425557   30687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-175414"
	I0815 23:21:25.425579   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:21:25.426050   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.426050   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.426085   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.426100   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.440949   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
	I0815 23:21:25.441245   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
	I0815 23:21:25.441438   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.441604   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.441945   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.441961   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.442103   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.442134   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.442257   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.442395   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:25.442428   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.442954   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.442999   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.444609   30687 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:21:25.444943   30687 kapi.go:59] client config for ha-175414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key", CAFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 23:21:25.445397   30687 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 23:21:25.445710   30687 addons.go:234] Setting addon default-storageclass=true in "ha-175414"
	I0815 23:21:25.445752   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:21:25.446143   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.446186   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.457427   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0815 23:21:25.457859   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.458333   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.458355   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.458658   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.458857   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:25.460360   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:25.461000   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0815 23:21:25.461383   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.461798   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.461814   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.462103   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.462419   30687 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 23:21:25.462552   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:25.462577   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:25.463676   30687 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 23:21:25.463690   30687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 23:21:25.463703   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:25.466871   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:25.467301   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:25.467326   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:25.467494   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:25.467660   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:25.467813   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:25.467933   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:25.478098   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0815 23:21:25.478410   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:25.478829   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:25.478850   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:25.479138   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:25.479297   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:25.480685   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:25.480874   30687 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 23:21:25.480888   30687 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 23:21:25.480905   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:25.483386   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:25.483724   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:25.483751   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:25.483880   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:25.484034   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:25.484165   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:25.484297   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:25.540010   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 23:21:25.607050   30687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 23:21:25.687685   30687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 23:21:25.863648   30687 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
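The sed pipeline run at 23:21:25.540 rewrites the coredns ConfigMap before pushing it back with `kubectl replace`: it inserts a log directive ahead of the errors plugin and, ahead of the forward plugin, a hosts block of approximately this shape:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

which is the "host record injected into CoreDNS's ConfigMap" reported above.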
	I0815 23:21:26.279811   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.279832   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.280175   30687 main.go:141] libmachine: (ha-175414) DBG | Closing plugin on server side
	I0815 23:21:26.280227   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.280243   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.280256   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.280264   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.280474   30687 main.go:141] libmachine: (ha-175414) DBG | Closing plugin on server side
	I0815 23:21:26.280474   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.280494   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.280530   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.280550   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.280545   30687 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 23:21:26.280615   30687 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 23:21:26.280713   30687 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0815 23:21:26.280729   30687 round_trippers.go:469] Request Headers:
	I0815 23:21:26.280738   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:21:26.280742   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:21:26.280777   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.280792   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.280800   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.280808   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.281001   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.281015   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.308678   30687 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0815 23:21:26.309209   30687 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0815 23:21:26.309225   30687 round_trippers.go:469] Request Headers:
	I0815 23:21:26.309235   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:21:26.309240   30687 round_trippers.go:473]     Content-Type: application/json
	I0815 23:21:26.309245   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:21:26.313823   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:21:26.314106   30687 main.go:141] libmachine: Making call to close driver server
	I0815 23:21:26.314121   30687 main.go:141] libmachine: (ha-175414) Calling .Close
	I0815 23:21:26.314417   30687 main.go:141] libmachine: Successfully made call to close driver server
	I0815 23:21:26.314433   30687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 23:21:26.316428   30687 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0815 23:21:26.317782   30687 addons.go:510] duration metric: took 892.367472ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0815 23:21:26.317815   30687 start.go:246] waiting for cluster config update ...
	I0815 23:21:26.317836   30687 start.go:255] writing updated cluster config ...
	I0815 23:21:26.319656   30687 out.go:201] 
	I0815 23:21:26.321129   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:26.321199   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:21:26.322990   30687 out.go:177] * Starting "ha-175414-m02" control-plane node in "ha-175414" cluster
	I0815 23:21:26.324296   30687 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:21:26.324316   30687 cache.go:56] Caching tarball of preloaded images
	I0815 23:21:26.324408   30687 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:21:26.324422   30687 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:21:26.324480   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:21:26.324632   30687 start.go:360] acquireMachinesLock for ha-175414-m02: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:21:26.324673   30687 start.go:364] duration metric: took 21.951µs to acquireMachinesLock for "ha-175414-m02"
	I0815 23:21:26.324694   30687 start.go:93] Provisioning new machine with config: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:21:26.324765   30687 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0815 23:21:26.326550   30687 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 23:21:26.326626   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:26.326649   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:26.341201   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0815 23:21:26.341635   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:26.342246   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:26.342270   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:26.342629   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:26.342937   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetMachineName
	I0815 23:21:26.343118   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:26.343297   30687 start.go:159] libmachine.API.Create for "ha-175414" (driver="kvm2")
	I0815 23:21:26.343323   30687 client.go:168] LocalClient.Create starting
	I0815 23:21:26.343359   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0815 23:21:26.343401   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:21:26.343421   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:21:26.343487   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0815 23:21:26.343513   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:21:26.343529   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:21:26.343552   30687 main.go:141] libmachine: Running pre-create checks...
	I0815 23:21:26.343563   30687 main.go:141] libmachine: (ha-175414-m02) Calling .PreCreateCheck
	I0815 23:21:26.343722   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetConfigRaw
	I0815 23:21:26.344139   30687 main.go:141] libmachine: Creating machine...
	I0815 23:21:26.344155   30687 main.go:141] libmachine: (ha-175414-m02) Calling .Create
	I0815 23:21:26.344282   30687 main.go:141] libmachine: (ha-175414-m02) Creating KVM machine...
	I0815 23:21:26.345587   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found existing default KVM network
	I0815 23:21:26.345727   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found existing private KVM network mk-ha-175414
	I0815 23:21:26.345866   30687 main.go:141] libmachine: (ha-175414-m02) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02 ...
	I0815 23:21:26.345890   30687 main.go:141] libmachine: (ha-175414-m02) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 23:21:26.345952   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:26.345831   31039 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:21:26.346061   30687 main.go:141] libmachine: (ha-175414-m02) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 23:21:26.604260   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:26.604134   31039 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa...
	I0815 23:21:26.747993   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:26.747888   31039 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/ha-175414-m02.rawdisk...
	I0815 23:21:26.748025   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Writing magic tar header
	I0815 23:21:26.748041   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Writing SSH key tar header
	I0815 23:21:26.748053   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:26.748013   31039 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02 ...
	I0815 23:21:26.748135   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02
	I0815 23:21:26.748160   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0815 23:21:26.748174   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02 (perms=drwx------)
	I0815 23:21:26.748188   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0815 23:21:26.748200   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:21:26.748217   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0815 23:21:26.748227   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 23:21:26.748258   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home/jenkins
	I0815 23:21:26.748284   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0815 23:21:26.748293   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Checking permissions on dir: /home
	I0815 23:21:26.748308   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Skipping /home - not owner
	I0815 23:21:26.748321   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0815 23:21:26.748330   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 23:21:26.748340   30687 main.go:141] libmachine: (ha-175414-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 23:21:26.748354   30687 main.go:141] libmachine: (ha-175414-m02) Creating domain...
	I0815 23:21:26.749332   30687 main.go:141] libmachine: (ha-175414-m02) define libvirt domain using xml: 
	I0815 23:21:26.749355   30687 main.go:141] libmachine: (ha-175414-m02) <domain type='kvm'>
	I0815 23:21:26.749365   30687 main.go:141] libmachine: (ha-175414-m02)   <name>ha-175414-m02</name>
	I0815 23:21:26.749378   30687 main.go:141] libmachine: (ha-175414-m02)   <memory unit='MiB'>2200</memory>
	I0815 23:21:26.749389   30687 main.go:141] libmachine: (ha-175414-m02)   <vcpu>2</vcpu>
	I0815 23:21:26.749398   30687 main.go:141] libmachine: (ha-175414-m02)   <features>
	I0815 23:21:26.749408   30687 main.go:141] libmachine: (ha-175414-m02)     <acpi/>
	I0815 23:21:26.749415   30687 main.go:141] libmachine: (ha-175414-m02)     <apic/>
	I0815 23:21:26.749427   30687 main.go:141] libmachine: (ha-175414-m02)     <pae/>
	I0815 23:21:26.749436   30687 main.go:141] libmachine: (ha-175414-m02)     
	I0815 23:21:26.749448   30687 main.go:141] libmachine: (ha-175414-m02)   </features>
	I0815 23:21:26.749455   30687 main.go:141] libmachine: (ha-175414-m02)   <cpu mode='host-passthrough'>
	I0815 23:21:26.749482   30687 main.go:141] libmachine: (ha-175414-m02)   
	I0815 23:21:26.749497   30687 main.go:141] libmachine: (ha-175414-m02)   </cpu>
	I0815 23:21:26.749511   30687 main.go:141] libmachine: (ha-175414-m02)   <os>
	I0815 23:21:26.749522   30687 main.go:141] libmachine: (ha-175414-m02)     <type>hvm</type>
	I0815 23:21:26.749534   30687 main.go:141] libmachine: (ha-175414-m02)     <boot dev='cdrom'/>
	I0815 23:21:26.749544   30687 main.go:141] libmachine: (ha-175414-m02)     <boot dev='hd'/>
	I0815 23:21:26.749561   30687 main.go:141] libmachine: (ha-175414-m02)     <bootmenu enable='no'/>
	I0815 23:21:26.749575   30687 main.go:141] libmachine: (ha-175414-m02)   </os>
	I0815 23:21:26.749583   30687 main.go:141] libmachine: (ha-175414-m02)   <devices>
	I0815 23:21:26.749592   30687 main.go:141] libmachine: (ha-175414-m02)     <disk type='file' device='cdrom'>
	I0815 23:21:26.749607   30687 main.go:141] libmachine: (ha-175414-m02)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/boot2docker.iso'/>
	I0815 23:21:26.749618   30687 main.go:141] libmachine: (ha-175414-m02)       <target dev='hdc' bus='scsi'/>
	I0815 23:21:26.749628   30687 main.go:141] libmachine: (ha-175414-m02)       <readonly/>
	I0815 23:21:26.749638   30687 main.go:141] libmachine: (ha-175414-m02)     </disk>
	I0815 23:21:26.749653   30687 main.go:141] libmachine: (ha-175414-m02)     <disk type='file' device='disk'>
	I0815 23:21:26.749670   30687 main.go:141] libmachine: (ha-175414-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 23:21:26.749687   30687 main.go:141] libmachine: (ha-175414-m02)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/ha-175414-m02.rawdisk'/>
	I0815 23:21:26.749698   30687 main.go:141] libmachine: (ha-175414-m02)       <target dev='hda' bus='virtio'/>
	I0815 23:21:26.749709   30687 main.go:141] libmachine: (ha-175414-m02)     </disk>
	I0815 23:21:26.749716   30687 main.go:141] libmachine: (ha-175414-m02)     <interface type='network'>
	I0815 23:21:26.749723   30687 main.go:141] libmachine: (ha-175414-m02)       <source network='mk-ha-175414'/>
	I0815 23:21:26.749733   30687 main.go:141] libmachine: (ha-175414-m02)       <model type='virtio'/>
	I0815 23:21:26.749741   30687 main.go:141] libmachine: (ha-175414-m02)     </interface>
	I0815 23:21:26.749749   30687 main.go:141] libmachine: (ha-175414-m02)     <interface type='network'>
	I0815 23:21:26.749757   30687 main.go:141] libmachine: (ha-175414-m02)       <source network='default'/>
	I0815 23:21:26.749761   30687 main.go:141] libmachine: (ha-175414-m02)       <model type='virtio'/>
	I0815 23:21:26.749772   30687 main.go:141] libmachine: (ha-175414-m02)     </interface>
	I0815 23:21:26.749777   30687 main.go:141] libmachine: (ha-175414-m02)     <serial type='pty'>
	I0815 23:21:26.749784   30687 main.go:141] libmachine: (ha-175414-m02)       <target port='0'/>
	I0815 23:21:26.749788   30687 main.go:141] libmachine: (ha-175414-m02)     </serial>
	I0815 23:21:26.749794   30687 main.go:141] libmachine: (ha-175414-m02)     <console type='pty'>
	I0815 23:21:26.749799   30687 main.go:141] libmachine: (ha-175414-m02)       <target type='serial' port='0'/>
	I0815 23:21:26.749804   30687 main.go:141] libmachine: (ha-175414-m02)     </console>
	I0815 23:21:26.749809   30687 main.go:141] libmachine: (ha-175414-m02)     <rng model='virtio'>
	I0815 23:21:26.749815   30687 main.go:141] libmachine: (ha-175414-m02)       <backend model='random'>/dev/random</backend>
	I0815 23:21:26.749823   30687 main.go:141] libmachine: (ha-175414-m02)     </rng>
	I0815 23:21:26.749833   30687 main.go:141] libmachine: (ha-175414-m02)     
	I0815 23:21:26.749851   30687 main.go:141] libmachine: (ha-175414-m02)     
	I0815 23:21:26.749865   30687 main.go:141] libmachine: (ha-175414-m02)   </devices>
	I0815 23:21:26.749881   30687 main.go:141] libmachine: (ha-175414-m02) </domain>
	I0815 23:21:26.749892   30687 main.go:141] libmachine: (ha-175414-m02) 
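Stripped of the log prefixes, the domain definition printed line-by-line above corresponds to roughly the following libvirt XML (indentation reconstructed from the log):

    <domain type='kvm'>
      <name>ha-175414-m02</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough'>
      </cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads' />
          <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/ha-175414-m02.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='mk-ha-175414'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <rng model='virtio'>
          <backend model='random'>/dev/random</backend>
        </rng>
      </devices>
    </domain>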
	I0815 23:21:26.756597   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:05:fa:e3 in network default
	I0815 23:21:26.757244   30687 main.go:141] libmachine: (ha-175414-m02) Ensuring networks are active...
	I0815 23:21:26.757271   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:26.758064   30687 main.go:141] libmachine: (ha-175414-m02) Ensuring network default is active
	I0815 23:21:26.758418   30687 main.go:141] libmachine: (ha-175414-m02) Ensuring network mk-ha-175414 is active
	I0815 23:21:26.758877   30687 main.go:141] libmachine: (ha-175414-m02) Getting domain xml...
	I0815 23:21:26.759855   30687 main.go:141] libmachine: (ha-175414-m02) Creating domain...
	I0815 23:21:28.013030   30687 main.go:141] libmachine: (ha-175414-m02) Waiting to get IP...
	I0815 23:21:28.013743   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:28.014200   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:28.014249   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:28.014192   31039 retry.go:31] will retry after 225.305823ms: waiting for machine to come up
	I0815 23:21:28.241736   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:28.242241   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:28.242274   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:28.242190   31039 retry.go:31] will retry after 251.988652ms: waiting for machine to come up
	I0815 23:21:28.495601   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:28.496087   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:28.496114   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:28.496054   31039 retry.go:31] will retry after 437.060646ms: waiting for machine to come up
	I0815 23:21:28.934522   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:28.935040   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:28.935067   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:28.934984   31039 retry.go:31] will retry after 464.445073ms: waiting for machine to come up
	I0815 23:21:29.401028   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:29.401961   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:29.401982   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:29.401913   31039 retry.go:31] will retry after 530.494313ms: waiting for machine to come up
	I0815 23:21:29.933553   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:29.933978   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:29.934006   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:29.933949   31039 retry.go:31] will retry after 641.182632ms: waiting for machine to come up
	I0815 23:21:30.576770   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:30.577186   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:30.577214   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:30.577154   31039 retry.go:31] will retry after 895.397592ms: waiting for machine to come up
	I0815 23:21:31.474027   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:31.474548   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:31.474581   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:31.474479   31039 retry.go:31] will retry after 1.179069294s: waiting for machine to come up
	I0815 23:21:32.655638   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:32.656123   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:32.656150   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:32.656088   31039 retry.go:31] will retry after 1.458887896s: waiting for machine to come up
	I0815 23:21:34.116818   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:34.117301   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:34.117325   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:34.117257   31039 retry.go:31] will retry after 1.696682837s: waiting for machine to come up
	I0815 23:21:35.816124   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:35.816725   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:35.816752   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:35.816660   31039 retry.go:31] will retry after 2.009785233s: waiting for machine to come up
	I0815 23:21:37.828384   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:37.828788   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:37.828817   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:37.828737   31039 retry.go:31] will retry after 3.146592515s: waiting for machine to come up
	I0815 23:21:40.978898   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:40.979296   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:40.979320   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:40.979241   31039 retry.go:31] will retry after 2.776399607s: waiting for machine to come up
	I0815 23:21:43.758501   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:43.758923   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find current IP address of domain ha-175414-m02 in network mk-ha-175414
	I0815 23:21:43.758946   30687 main.go:141] libmachine: (ha-175414-m02) DBG | I0815 23:21:43.758886   31039 retry.go:31] will retry after 4.758298763s: waiting for machine to come up
	I0815 23:21:48.520002   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.520447   30687 main.go:141] libmachine: (ha-175414-m02) Found IP for machine: 192.168.39.19
	I0815 23:21:48.520466   30687 main.go:141] libmachine: (ha-175414-m02) Reserving static IP address...
	I0815 23:21:48.520479   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has current primary IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.520816   30687 main.go:141] libmachine: (ha-175414-m02) DBG | unable to find host DHCP lease matching {name: "ha-175414-m02", mac: "52:54:00:3f:bf:67", ip: "192.168.39.19"} in network mk-ha-175414
	I0815 23:21:48.592403   30687 main.go:141] libmachine: (ha-175414-m02) Reserved static IP address: 192.168.39.19
	I0815 23:21:48.592434   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Getting to WaitForSSH function...
	I0815 23:21:48.592443   30687 main.go:141] libmachine: (ha-175414-m02) Waiting for SSH to be available...
	I0815 23:21:48.595218   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.595698   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:48.595728   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.595888   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Using SSH client type: external
	I0815 23:21:48.595911   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa (-rw-------)
	I0815 23:21:48.595941   30687 main.go:141] libmachine: (ha-175414-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:21:48.595954   30687 main.go:141] libmachine: (ha-175414-m02) DBG | About to run SSH command:
	I0815 23:21:48.595967   30687 main.go:141] libmachine: (ha-175414-m02) DBG | exit 0
	I0815 23:21:48.725957   30687 main.go:141] libmachine: (ha-175414-m02) DBG | SSH cmd err, output: <nil>: 
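The WaitForSSH probe shells out to the system ssh client; reassembled from the DBG lines above, the command is approximately:

    /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa -p 22 docker@192.168.39.19 "exit 0"

and the empty error/output reported at 23:21:48.725 means the guest accepted the connection, so KVM machine creation is treated as complete.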
	I0815 23:21:48.726223   30687 main.go:141] libmachine: (ha-175414-m02) KVM machine creation complete!
	I0815 23:21:48.726537   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetConfigRaw
	I0815 23:21:48.727043   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:48.727249   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:48.727391   30687 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 23:21:48.727406   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:21:48.728641   30687 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 23:21:48.728653   30687 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 23:21:48.728658   30687 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 23:21:48.728666   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:48.730629   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.730945   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:48.730983   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.731145   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:48.731320   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.731459   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.731574   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:48.731722   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:48.731965   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:48.731979   30687 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 23:21:48.845468   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:21:48.845491   30687 main.go:141] libmachine: Detecting the provisioner...
	I0815 23:21:48.845499   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:48.848642   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.849140   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:48.849167   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.849324   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:48.849529   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.849692   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.849873   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:48.850060   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:48.850222   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:48.850233   30687 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 23:21:48.962809   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 23:21:48.962857   30687 main.go:141] libmachine: found compatible host: buildroot
	I0815 23:21:48.962864   30687 main.go:141] libmachine: Provisioning with buildroot...
	I0815 23:21:48.962871   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetMachineName
	I0815 23:21:48.963237   30687 buildroot.go:166] provisioning hostname "ha-175414-m02"
	I0815 23:21:48.963267   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetMachineName
	I0815 23:21:48.963457   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:48.966351   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.966740   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:48.966766   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:48.966866   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:48.967052   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.967199   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:48.967327   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:48.967460   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:48.967653   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:48.967670   30687 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-175414-m02 && echo "ha-175414-m02" | sudo tee /etc/hostname
	I0815 23:21:49.097358   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414-m02
	
	I0815 23:21:49.097399   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.099970   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.100296   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.100323   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.100652   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.100826   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.101009   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.101147   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.101325   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:49.101532   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:49.101549   30687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-175414-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-175414-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-175414-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:21:49.223309   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
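The script above keeps the hosts file consistent with the new hostname: if /etc/hosts has no entry ending in ha-175414-m02, it rewrites an existing 127.0.1.1 line or, failing that, appends "127.0.1.1 ha-175414-m02", so the node can resolve its own name locally.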
	I0815 23:21:49.223337   30687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:21:49.223356   30687 buildroot.go:174] setting up certificates
	I0815 23:21:49.223369   30687 provision.go:84] configureAuth start
	I0815 23:21:49.223382   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetMachineName
	I0815 23:21:49.223658   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:21:49.226551   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.226912   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.226937   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.227060   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.229486   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.229820   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.229858   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.230009   30687 provision.go:143] copyHostCerts
	I0815 23:21:49.230034   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:21:49.230062   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:21:49.230070   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:21:49.230157   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:21:49.230229   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:21:49.230246   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:21:49.230252   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:21:49.230279   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:21:49.230321   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:21:49.230337   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:21:49.230344   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:21:49.230363   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:21:49.230411   30687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.ha-175414-m02 san=[127.0.0.1 192.168.39.19 ha-175414-m02 localhost minikube]
	I0815 23:21:49.393186   30687 provision.go:177] copyRemoteCerts
	I0815 23:21:49.393242   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:21:49.393262   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.396221   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.396535   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.396564   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.396718   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.396904   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.397131   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.397244   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:21:49.486103   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:21:49.486171   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:21:49.511552   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:21:49.511621   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 23:21:49.535771   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:21:49.535862   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 23:21:49.559512   30687 provision.go:87] duration metric: took 336.130825ms to configureAuth
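The provision.go lines above generate a server certificate for the new node with SANs [127.0.0.1 192.168.39.19 ha-175414-m02 localhost minikube], signed by the shared minikube CA, then copy it to /etc/docker on the guest. A minimal Go sketch of that issuance step, using only the standard crypto/x509 package, is below; the file names, key size and validity period are assumptions for illustration, and this is not minikube's actual provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a PEM file and returns the DER bytes of its first block.
func mustPEM(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("%s: no PEM block found", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	// Assumes the CA key is a PKCS#1 RSA key, as minikube's generated keys are.
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem"))
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-175414-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity chosen for the example
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list taken from the provision.go line above.
		DNSNames:    []string{"ha-175414-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.19")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
}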
	I0815 23:21:49.559545   30687 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:21:49.559771   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:49.559852   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.562400   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.562773   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.562795   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.562975   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.563175   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.563341   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.563454   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.563587   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:49.563763   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:49.563777   30687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:21:49.836024   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
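The "About to run SSH command" block above writes /etc/sysconfig/crio.minikube on the node and restarts CRI-O over SSH. A hedged sketch of running that same remote command with golang.org/x/crypto/ssh follows; the key path and user mirror the log, but this is illustrative code, not libmachine's native SSH client.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.19:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The remote command is the one shown in the log above.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("output: %s err: %v\n", out, err)
}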
	I0815 23:21:49.836053   30687 main.go:141] libmachine: Checking connection to Docker...
	I0815 23:21:49.836163   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetURL
	I0815 23:21:49.837426   30687 main.go:141] libmachine: (ha-175414-m02) DBG | Using libvirt version 6000000
	I0815 23:21:49.839557   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.839847   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.839868   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.840024   30687 main.go:141] libmachine: Docker is up and running!
	I0815 23:21:49.840039   30687 main.go:141] libmachine: Reticulating splines...
	I0815 23:21:49.840045   30687 client.go:171] duration metric: took 23.496715133s to LocalClient.Create
	I0815 23:21:49.840065   30687 start.go:167] duration metric: took 23.496770406s to libmachine.API.Create "ha-175414"
	I0815 23:21:49.840073   30687 start.go:293] postStartSetup for "ha-175414-m02" (driver="kvm2")
	I0815 23:21:49.840082   30687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:21:49.840097   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:49.840318   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:21:49.840336   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.842471   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.842793   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.842823   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.842919   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.843081   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.843221   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.843374   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:21:49.929224   30687 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:21:49.933955   30687 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:21:49.933984   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:21:49.934053   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:21:49.934140   30687 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:21:49.934152   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:21:49.934256   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:21:49.946114   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:21:49.971857   30687 start.go:296] duration metric: took 131.770868ms for postStartSetup
	I0815 23:21:49.971911   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetConfigRaw
	I0815 23:21:49.972472   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:21:49.974970   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.975569   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.975593   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.975871   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:21:49.976076   30687 start.go:128] duration metric: took 23.65129957s to createHost
	I0815 23:21:49.976105   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:49.978338   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.978674   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:49.978709   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:49.978853   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:49.979020   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.979141   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:49.979279   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:49.979459   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:21:49.979629   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0815 23:21:49.979642   30687 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:21:50.094933   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764110.070564150
	
	I0815 23:21:50.094952   30687 fix.go:216] guest clock: 1723764110.070564150
	I0815 23:21:50.094958   30687 fix.go:229] Guest: 2024-08-15 23:21:50.07056415 +0000 UTC Remote: 2024-08-15 23:21:49.976091477 +0000 UTC m=+70.877357108 (delta=94.472673ms)
	I0815 23:21:50.094973   30687 fix.go:200] guest clock delta is within tolerance: 94.472673ms
	I0815 23:21:50.094977   30687 start.go:83] releasing machines lock for "ha-175414-m02", held for 23.770294763s
	I0815 23:21:50.094997   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:50.095269   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:21:50.098173   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.098492   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:50.098525   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.100852   30687 out.go:177] * Found network options:
	I0815 23:21:50.102111   30687 out.go:177]   - NO_PROXY=192.168.39.67
	W0815 23:21:50.103403   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 23:21:50.103430   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:50.103942   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:50.104122   30687 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:21:50.104191   30687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:21:50.104226   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	W0815 23:21:50.104292   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 23:21:50.104375   30687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:21:50.104398   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:21:50.106666   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.107067   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.107136   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:50.107161   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.107308   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:50.107452   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:50.107541   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:50.107559   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:50.107604   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:50.107736   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:21:50.107805   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:21:50.107873   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:21:50.108004   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:21:50.108123   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:21:50.345058   30687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:21:50.351133   30687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:21:50.351196   30687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:21:50.369303   30687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 23:21:50.369336   30687 start.go:495] detecting cgroup driver to use...
	I0815 23:21:50.369407   30687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:21:50.387197   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:21:50.402389   30687 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:21:50.402456   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:21:50.417085   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:21:50.431734   30687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:21:50.561170   30687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:21:50.705184   30687 docker.go:233] disabling docker service ...
	I0815 23:21:50.705263   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:21:50.720438   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:21:50.733936   30687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:21:50.870015   30687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:21:50.993385   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:21:51.007433   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:21:51.026558   30687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:21:51.026622   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.041656   30687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:21:51.041714   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.053502   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.064265   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.074719   30687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:21:51.085392   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.095900   30687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.113421   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:21:51.123579   30687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:21:51.132769   30687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 23:21:51.132816   30687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 23:21:51.145489   30687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:21:51.154882   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:21:51.271396   30687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 23:21:51.410590   30687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:21:51.410664   30687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:21:51.415423   30687 start.go:563] Will wait 60s for crictl version
	I0815 23:21:51.415488   30687 ssh_runner.go:195] Run: which crictl
	I0815 23:21:51.419517   30687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:21:51.458295   30687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:21:51.458387   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:21:51.487145   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:21:51.517696   30687 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:21:51.519076   30687 out.go:177]   - env NO_PROXY=192.168.39.67
	I0815 23:21:51.520269   30687 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:21:51.522990   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:51.523318   30687 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:21:41 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:21:51.523340   30687 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:21:51.523524   30687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:21:51.527692   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
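The bash one-liner above rewrites /etc/hosts by dropping any stale host.minikube.internal entry and appending the gateway IP. An equivalent sketch in Go is below, written against a scratch file so it can be run without root; the helper name is ours, not minikube's.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<name>" (as the grep -v in the
// log does) and appends "<ip>\t<name>".
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}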
	I0815 23:21:51.541082   30687 mustload.go:65] Loading cluster: ha-175414
	I0815 23:21:51.541287   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:21:51.541578   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:51.541605   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:51.556079   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38217
	I0815 23:21:51.556570   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:51.557063   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:51.557087   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:51.557352   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:51.557505   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:21:51.559104   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:21:51.559371   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:51.559392   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:51.573419   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39849
	I0815 23:21:51.573768   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:51.574205   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:51.574232   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:51.574569   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:51.574765   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:51.574918   30687 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414 for IP: 192.168.39.19
	I0815 23:21:51.574929   30687 certs.go:194] generating shared ca certs ...
	I0815 23:21:51.574946   30687 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:51.575085   30687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:21:51.575137   30687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:21:51.575151   30687 certs.go:256] generating profile certs ...
	I0815 23:21:51.575233   30687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key
	I0815 23:21:51.575263   30687 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.28369cf2
	I0815 23:21:51.575284   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.28369cf2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.19 192.168.39.254]
	I0815 23:21:51.864708   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.28369cf2 ...
	I0815 23:21:51.864746   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.28369cf2: {Name:mk1af29fefa6fcd050dd679013330c0736cb81cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:51.864941   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.28369cf2 ...
	I0815 23:21:51.864958   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.28369cf2: {Name:mk711f6a080c33c4577e6174099e0ff15fdd0e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:21:51.865064   30687 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.28369cf2 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt
	I0815 23:21:51.865191   30687 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.28369cf2 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key
	I0815 23:21:51.865310   30687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key
	I0815 23:21:51.865325   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:21:51.865337   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:21:51.865350   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:21:51.865363   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:21:51.865375   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:21:51.865387   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:21:51.865399   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:21:51.865410   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:21:51.865459   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:21:51.865485   30687 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:21:51.865495   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:21:51.865518   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:21:51.865540   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:21:51.865562   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:21:51.865597   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:21:51.865621   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:21:51.865636   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:51.865648   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:21:51.865677   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:51.868573   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:51.868973   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:51.869003   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:51.869152   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:51.869347   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:51.869497   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:51.869644   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:51.942252   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 23:21:51.947249   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 23:21:51.959431   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 23:21:51.964431   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 23:21:51.975814   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 23:21:51.980044   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 23:21:51.993528   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 23:21:51.998728   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 23:21:52.013690   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 23:21:52.024012   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 23:21:52.037471   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 23:21:52.043042   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 23:21:52.059824   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:21:52.085017   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:21:52.108644   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:21:52.133288   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:21:52.157875   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 23:21:52.183550   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 23:21:52.208624   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:21:52.233447   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:21:52.258959   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:21:52.283917   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:21:52.307721   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:21:52.331804   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 23:21:52.349321   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 23:21:52.366528   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 23:21:52.384149   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 23:21:52.401130   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 23:21:52.417900   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 23:21:52.434872   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 23:21:52.451833   30687 ssh_runner.go:195] Run: openssl version
	I0815 23:21:52.457773   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:21:52.469770   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:52.474461   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:52.474509   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:21:52.480338   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 23:21:52.491586   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:21:52.503378   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:21:52.507973   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:21:52.508041   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:21:52.513863   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0815 23:21:52.525112   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:21:52.536745   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:21:52.541326   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:21:52.541381   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:21:52.547130   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 23:21:52.559218   30687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:21:52.563283   30687 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 23:21:52.563329   30687 kubeadm.go:934] updating node {m02 192.168.39.19 8443 v1.31.0 crio true true} ...
	I0815 23:21:52.563410   30687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-175414-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:21:52.563437   30687 kube-vip.go:115] generating kube-vip config ...
	I0815 23:21:52.563476   30687 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 23:21:52.579719   30687 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 23:21:52.579804   30687 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 23:21:52.579861   30687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:21:52.590185   30687 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 23:21:52.590239   30687 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 23:21:52.600604   30687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 23:21:52.600629   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 23:21:52.600695   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 23:21:52.600754   30687 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0815 23:21:52.600789   30687 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0815 23:21:52.605051   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 23:21:52.605083   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 23:21:53.213833   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 23:21:53.213960   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 23:21:53.219202   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 23:21:53.219244   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 23:21:53.283041   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:21:53.326596   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 23:21:53.326693   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 23:21:53.332854   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 23:21:53.332893   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
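The binary.go/download.go lines above fetch kubectl, kubeadm and kubelet from dl.k8s.io, checking each against its published .sha256 file before copying it into /var/lib/minikube/binaries. A simplified Go sketch of the download-and-verify step follows; it omits minikube's on-disk cache and retry logic, and the output path is only for the example.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory; fine for a sketch, not for 77MB kubelets
// on constrained hosts.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file starts with the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubeadm verified and written")
}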
	I0815 23:21:53.764470   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 23:21:53.774767   30687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0815 23:21:53.791984   30687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:21:53.809533   30687 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 23:21:53.827678   30687 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 23:21:53.831629   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:21:53.844814   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:21:53.967941   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:21:53.986117   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:21:53.986517   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:21:53.986556   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:21:54.001404   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0815 23:21:54.001924   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:21:54.002379   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:21:54.002401   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:21:54.002737   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:21:54.002924   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:21:54.003063   30687 start.go:317] joinCluster: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:21:54.003176   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 23:21:54.003191   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:21:54.006151   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:54.006602   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:21:54.006629   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:21:54.006994   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:21:54.007187   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:21:54.007315   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:21:54.007517   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:21:54.158764   30687 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:21:54.158802   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ndxr0c.wkjp0rvuu46mh8r8 --discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-175414-m02 --control-plane --apiserver-advertise-address=192.168.39.19 --apiserver-bind-port=8443"
	I0815 23:22:15.902670   30687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ndxr0c.wkjp0rvuu46mh8r8 --discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-175414-m02 --control-plane --apiserver-advertise-address=192.168.39.19 --apiserver-bind-port=8443": (21.743838525s)
	I0815 23:22:15.902704   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 23:22:16.457267   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-175414-m02 minikube.k8s.io/updated_at=2024_08_15T23_22_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=ha-175414 minikube.k8s.io/primary=false
	I0815 23:22:16.582820   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-175414-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 23:22:16.711508   30687 start.go:319] duration metric: took 22.708440464s to joinCluster
	I0815 23:22:16.711580   30687 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:22:16.711873   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:22:16.713906   30687 out.go:177] * Verifying Kubernetes components...
	I0815 23:22:16.715266   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:22:16.914959   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:22:16.930821   30687 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:22:16.931180   30687 kapi.go:59] client config for ha-175414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key", CAFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 23:22:16.931267   30687 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I0815 23:22:16.931575   30687 node_ready.go:35] waiting up to 6m0s for node "ha-175414-m02" to be "Ready" ...
	I0815 23:22:16.931718   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:16.931731   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:16.931741   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:16.931748   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:16.945769   30687 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0815 23:22:17.432660   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:17.432681   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:17.432688   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:17.432693   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:17.438224   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:17.931830   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:17.931856   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:17.931867   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:17.931872   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:17.934928   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:18.431815   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:18.431841   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:18.431850   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:18.431858   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:18.435978   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:18.932756   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:18.932779   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:18.932789   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:18.932794   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:18.944563   30687 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0815 23:22:18.945291   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:19.431861   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:19.431881   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:19.431889   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:19.431893   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:19.434996   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:19.932054   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:19.932081   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:19.932092   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:19.932099   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:19.935557   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:20.432316   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:20.432342   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:20.432354   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:20.432359   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:20.436797   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:20.932722   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:20.932746   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:20.932762   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:20.932766   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:21.020664   30687 round_trippers.go:574] Response Status: 200 OK in 87 milliseconds
	I0815 23:22:21.021343   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:21.431996   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:21.432022   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:21.432032   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:21.432039   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:21.434662   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:21.932425   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:21.932451   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:21.932462   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:21.932467   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:21.939310   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:22:22.432822   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:22.432851   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:22.432863   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:22.432869   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:22.438244   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:22.932102   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:22.932126   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:22.932135   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:22.932140   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:22.935256   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:23.432155   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:23.432176   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:23.432186   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:23.432192   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:23.435979   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:23.436385   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:23.931786   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:23.931808   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:23.931824   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:23.931829   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:23.935115   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:24.431884   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:24.431905   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:24.431912   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:24.431916   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:24.434787   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:24.932306   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:24.932329   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:24.932337   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:24.932342   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:24.935665   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:25.431909   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:25.431927   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:25.431935   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:25.431940   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:25.435398   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:25.931790   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:25.931809   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:25.931817   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:25.931820   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:25.937748   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:25.938566   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:26.431951   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:26.431978   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:26.431989   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:26.431996   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:26.435337   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:26.931971   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:26.931994   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:26.932002   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:26.932006   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:26.935332   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:27.432613   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:27.432637   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:27.432645   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:27.432650   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:27.435773   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:27.931800   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:27.931822   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:27.931830   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:27.931834   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:27.935039   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:28.431819   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:28.431840   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:28.431848   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:28.431851   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:28.434566   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:28.435017   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:28.932436   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:28.932458   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:28.932466   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:28.932471   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:28.936039   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:29.432176   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:29.432201   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:29.432211   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:29.432216   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:29.435339   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:29.932422   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:29.932449   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:29.932460   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:29.932465   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:29.936328   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:30.432389   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:30.432410   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:30.432417   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:30.432420   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:30.435406   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:30.436249   30687 node_ready.go:53] node "ha-175414-m02" has status "Ready":"False"
	I0815 23:22:30.932709   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:30.932731   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:30.932738   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:30.932742   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:30.936181   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:31.432570   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:31.432590   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:31.432597   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:31.432601   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:31.436728   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:31.932741   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:31.932770   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:31.932780   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:31.932784   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:31.935947   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:32.431901   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:32.431924   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.431932   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.431935   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.435310   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:32.932661   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:32.932680   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.932689   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.932693   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.946799   30687 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0815 23:22:32.947786   30687 node_ready.go:49] node "ha-175414-m02" has status "Ready":"True"
	I0815 23:22:32.947805   30687 node_ready.go:38] duration metric: took 16.016188008s for node "ha-175414-m02" to be "Ready" ...
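
Everything above, from the first GET to the Ready transition, is one polling loop: node_ready.go re-fetches the Node object every 500ms and checks its Ready condition until it flips to True (about 16s in this run). A minimal client-go sketch of the same wait, assuming the kubeconfig path loaded earlier (illustrative only):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19452-12919/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, as the log does, until the node's Ready condition is True.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-175414-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }
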
	I0815 23:22:32.947812   30687 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 23:22:32.947881   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:32.947892   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.947901   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.947911   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.967321   30687 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0815 23:22:32.974011   30687 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:32.974102   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vkm5s
	I0815 23:22:32.974112   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.974119   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.974122   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.983281   30687 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0815 23:22:32.983883   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:32.983900   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.983907   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.983912   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.989285   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:32.989816   30687 pod_ready.go:93] pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:32.989837   30687 pod_ready.go:82] duration metric: took 15.801916ms for pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:32.989861   30687 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:32.989934   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zrv4c
	I0815 23:22:32.989951   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.989962   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.989970   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:32.996009   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:22:32.996645   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:32.996659   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:32.996667   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:32.996683   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.007664   30687 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0815 23:22:33.008216   30687 pod_ready.go:93] pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.008234   30687 pod_ready.go:82] duration metric: took 18.36539ms for pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.008245   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.008313   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414
	I0815 23:22:33.008325   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.008335   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.008344   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.013455   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:22:33.014148   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:33.014183   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.014193   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.014200   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.018177   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:33.019329   30687 pod_ready.go:93] pod "etcd-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.019357   30687 pod_ready.go:82] duration metric: took 11.103182ms for pod "etcd-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.019374   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.019450   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414-m02
	I0815 23:22:33.019457   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.019467   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.019473   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.023241   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:33.023952   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:33.023968   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.023976   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.023980   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.026457   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:22:33.026972   30687 pod_ready.go:93] pod "etcd-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.027001   30687 pod_ready.go:82] duration metric: took 7.618346ms for pod "etcd-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.027017   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.133348   30687 request.go:632] Waited for 106.269823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414
	I0815 23:22:33.133406   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414
	I0815 23:22:33.133411   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.133418   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.133422   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.137569   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:33.332905   30687 request.go:632] Waited for 194.308147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:33.332973   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:33.332981   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.332993   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.332998   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.336570   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:33.337178   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.337210   30687 pod_ready.go:82] duration metric: took 310.183521ms for pod "kube-apiserver-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.337235   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.533459   30687 request.go:632] Waited for 196.156079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m02
	I0815 23:22:33.533539   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m02
	I0815 23:22:33.533548   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.533561   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.533569   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.537905   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:33.733038   30687 request.go:632] Waited for 194.38316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:33.733106   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:33.733114   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.733122   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.733130   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.736641   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:33.737513   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:33.737531   30687 pod_ready.go:82] duration metric: took 400.289414ms for pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.737540   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:33.933661   30687 request.go:632] Waited for 196.059272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414
	I0815 23:22:33.933720   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414
	I0815 23:22:33.933725   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:33.933731   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:33.933735   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:33.937028   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:34.133387   30687 request.go:632] Waited for 195.365027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:34.133433   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:34.133438   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.133445   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.133448   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.136475   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:34.136976   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:34.136995   30687 pod_ready.go:82] duration metric: took 399.448393ms for pod "kube-controller-manager-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.137005   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.333497   30687 request.go:632] Waited for 196.419328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m02
	I0815 23:22:34.333551   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m02
	I0815 23:22:34.333557   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.333564   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.333568   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.337210   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:34.533398   30687 request.go:632] Waited for 195.343135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:34.533449   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:34.533454   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.533461   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.533466   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.537225   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:34.537735   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:34.537753   30687 pod_ready.go:82] duration metric: took 400.740862ms for pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.537762   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4frcn" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.732804   30687 request.go:632] Waited for 194.975905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frcn
	I0815 23:22:34.732879   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frcn
	I0815 23:22:34.732884   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.732892   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.732896   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.737205   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:34.933417   30687 request.go:632] Waited for 195.403534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:34.933478   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:34.933484   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:34.933491   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:34.933496   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:34.938383   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:34.938857   30687 pod_ready.go:93] pod "kube-proxy-4frcn" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:34.938875   30687 pod_ready.go:82] duration metric: took 401.107272ms for pod "kube-proxy-4frcn" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:34.938884   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dcnmc" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.133103   30687 request.go:632] Waited for 194.151951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcnmc
	I0815 23:22:35.133181   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcnmc
	I0815 23:22:35.133186   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.133194   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.133197   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.137367   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:35.333458   30687 request.go:632] Waited for 195.373306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:35.333528   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:35.333534   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.333541   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.333545   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.337654   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:35.338615   30687 pod_ready.go:93] pod "kube-proxy-dcnmc" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:35.338633   30687 pod_ready.go:82] duration metric: took 399.743347ms for pod "kube-proxy-dcnmc" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.338640   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.532934   30687 request.go:632] Waited for 194.214051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414
	I0815 23:22:35.532997   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414
	I0815 23:22:35.533003   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.533011   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.533014   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.537133   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:35.733165   30687 request.go:632] Waited for 195.403533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:35.733240   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:22:35.733249   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.733260   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.733273   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.736841   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:35.737423   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:35.737439   30687 pod_ready.go:82] duration metric: took 398.792945ms for pod "kube-scheduler-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.737448   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:35.933530   30687 request.go:632] Waited for 196.011478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m02
	I0815 23:22:35.933610   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m02
	I0815 23:22:35.933616   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:35.933623   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:35.933628   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:35.936841   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:36.132797   30687 request.go:632] Waited for 195.30673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:36.132870   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:22:36.132877   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.132887   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.132892   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.136147   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:36.136724   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:22:36.136748   30687 pod_ready.go:82] duration metric: took 399.292336ms for pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:22:36.136759   30687 pod_ready.go:39] duration metric: took 3.188935798s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
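
The block above repeats the same two-request pattern per component: fetch the pod, then fetch the node it runs on, and accept it once the pod's Ready condition is True. A minimal sketch that simply lists kube-system and reports any pod not yet Ready, again assuming the same kubeconfig (illustrative; minikube's pod_ready.go filters by the component labels listed above rather than listing everything):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19452-12919/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for i := range pods.Items {
            if !podReady(&pods.Items[i]) {
                fmt.Printf("not ready: %s\n", pods.Items[i].Name)
            }
        }
    }
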
	I0815 23:22:36.136774   30687 api_server.go:52] waiting for apiserver process to appear ...
	I0815 23:22:36.136822   30687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:22:36.152417   30687 api_server.go:72] duration metric: took 19.440801659s to wait for apiserver process to appear ...
	I0815 23:22:36.152446   30687 api_server.go:88] waiting for apiserver healthz status ...
	I0815 23:22:36.152469   30687 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0815 23:22:36.157227   30687 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I0815 23:22:36.157300   30687 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I0815 23:22:36.157311   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.157322   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.157327   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.158258   30687 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 23:22:36.158464   30687 api_server.go:141] control plane version: v1.31.0
	I0815 23:22:36.158490   30687 api_server.go:131] duration metric: took 6.036229ms to wait for apiserver health ...
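
The two probes above map onto two apiserver endpoints: GET /healthz, which should return the literal body "ok", and GET /version, which is where the "control plane version: v1.31.0" line comes from. A minimal client-go sketch of both probes, under the same kubeconfig assumption as before:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19452-12919/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz through the discovery REST client; a healthy apiserver answers "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)
        // GET /version, the source of the control plane version line.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
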
	I0815 23:22:36.158499   30687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 23:22:36.332764   30687 request.go:632] Waited for 174.201426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:36.332854   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:36.332863   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.332875   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.332886   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.340757   30687 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 23:22:36.345753   30687 system_pods.go:59] 17 kube-system pods found
	I0815 23:22:36.345792   30687 system_pods.go:61] "coredns-6f6b679f8f-vkm5s" [1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c] Running
	I0815 23:22:36.345799   30687 system_pods.go:61] "coredns-6f6b679f8f-zrv4c" [97d399d0-871e-4e59-8c4d-093b5a29a107] Running
	I0815 23:22:36.345805   30687 system_pods.go:61] "etcd-ha-175414" [8358595a-b7fc-40b0-b3a1-8bce46f618dd] Running
	I0815 23:22:36.345812   30687 system_pods.go:61] "etcd-ha-175414-m02" [fd9e81e9-bfd2-4040-9425-06a84b9c3dda] Running
	I0815 23:22:36.345817   30687 system_pods.go:61] "kindnet-47nts" [969ed4f0-c372-4d22-ba84-cfcd5774f1cf] Running
	I0815 23:22:36.345825   30687 system_pods.go:61] "kindnet-jjcdm" [534a226d-c0b6-4a2f-8b2c-27921c9e1aca] Running
	I0815 23:22:36.345833   30687 system_pods.go:61] "kube-apiserver-ha-175414" [74c0c52d-72f6-425e-ba1e-047ebb890ed4] Running
	I0815 23:22:36.345854   30687 system_pods.go:61] "kube-apiserver-ha-175414-m02" [019a6c53-1d80-40a3-93ea-6179c12e17ed] Running
	I0815 23:22:36.345864   30687 system_pods.go:61] "kube-controller-manager-ha-175414" [88aeb420-f593-4e18-8149-6fe48fd85b7d] Running
	I0815 23:22:36.345871   30687 system_pods.go:61] "kube-controller-manager-ha-175414-m02" [be3e762b-556f-4881-9a29-c9a867ccb5e7] Running
	I0815 23:22:36.345878   30687 system_pods.go:61] "kube-proxy-4frcn" [2831334a-a379-4f6d-ada3-53a01fc6f65e] Running
	I0815 23:22:36.345884   30687 system_pods.go:61] "kube-proxy-dcnmc" [572a1e80-23b0-4cb9-bfab-067b6853226d] Running
	I0815 23:22:36.345892   30687 system_pods.go:61] "kube-scheduler-ha-175414" [7463fcbb-2a5f-4101-8b25-f72c74ca515a] Running
	I0815 23:22:36.345898   30687 system_pods.go:61] "kube-scheduler-ha-175414-m02" [1e5715dc-154a-4669-8a4e-986bb989a16b] Running
	I0815 23:22:36.345908   30687 system_pods.go:61] "kube-vip-ha-175414" [6b98571e-8ad5-45e0-acbc-d0e875647a69] Running
	I0815 23:22:36.345914   30687 system_pods.go:61] "kube-vip-ha-175414-m02" [4877d97c-4adb-4ce8-813f-0819e8a96b5a] Running
	I0815 23:22:36.345920   30687 system_pods.go:61] "storage-provisioner" [7042d764-6043-449c-a1e9-aaa28256c579] Running
	I0815 23:22:36.345928   30687 system_pods.go:74] duration metric: took 187.421636ms to wait for pod list to return data ...
	I0815 23:22:36.345940   30687 default_sa.go:34] waiting for default service account to be created ...
	I0815 23:22:36.533732   30687 request.go:632] Waited for 187.721428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I0815 23:22:36.533801   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I0815 23:22:36.533813   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.533824   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.533831   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.537689   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:36.537926   30687 default_sa.go:45] found service account: "default"
	I0815 23:22:36.537946   30687 default_sa.go:55] duration metric: took 191.997547ms for default service account to be created ...
	I0815 23:22:36.537953   30687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 23:22:36.732805   30687 request.go:632] Waited for 194.768976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:36.732891   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:22:36.732902   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.732914   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.732924   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.737657   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:22:36.742379   30687 system_pods.go:86] 17 kube-system pods found
	I0815 23:22:36.742407   30687 system_pods.go:89] "coredns-6f6b679f8f-vkm5s" [1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c] Running
	I0815 23:22:36.742414   30687 system_pods.go:89] "coredns-6f6b679f8f-zrv4c" [97d399d0-871e-4e59-8c4d-093b5a29a107] Running
	I0815 23:22:36.742420   30687 system_pods.go:89] "etcd-ha-175414" [8358595a-b7fc-40b0-b3a1-8bce46f618dd] Running
	I0815 23:22:36.742425   30687 system_pods.go:89] "etcd-ha-175414-m02" [fd9e81e9-bfd2-4040-9425-06a84b9c3dda] Running
	I0815 23:22:36.742429   30687 system_pods.go:89] "kindnet-47nts" [969ed4f0-c372-4d22-ba84-cfcd5774f1cf] Running
	I0815 23:22:36.742435   30687 system_pods.go:89] "kindnet-jjcdm" [534a226d-c0b6-4a2f-8b2c-27921c9e1aca] Running
	I0815 23:22:36.742441   30687 system_pods.go:89] "kube-apiserver-ha-175414" [74c0c52d-72f6-425e-ba1e-047ebb890ed4] Running
	I0815 23:22:36.742446   30687 system_pods.go:89] "kube-apiserver-ha-175414-m02" [019a6c53-1d80-40a3-93ea-6179c12e17ed] Running
	I0815 23:22:36.742452   30687 system_pods.go:89] "kube-controller-manager-ha-175414" [88aeb420-f593-4e18-8149-6fe48fd85b7d] Running
	I0815 23:22:36.742461   30687 system_pods.go:89] "kube-controller-manager-ha-175414-m02" [be3e762b-556f-4881-9a29-c9a867ccb5e7] Running
	I0815 23:22:36.742469   30687 system_pods.go:89] "kube-proxy-4frcn" [2831334a-a379-4f6d-ada3-53a01fc6f65e] Running
	I0815 23:22:36.742476   30687 system_pods.go:89] "kube-proxy-dcnmc" [572a1e80-23b0-4cb9-bfab-067b6853226d] Running
	I0815 23:22:36.742485   30687 system_pods.go:89] "kube-scheduler-ha-175414" [7463fcbb-2a5f-4101-8b25-f72c74ca515a] Running
	I0815 23:22:36.742494   30687 system_pods.go:89] "kube-scheduler-ha-175414-m02" [1e5715dc-154a-4669-8a4e-986bb989a16b] Running
	I0815 23:22:36.742502   30687 system_pods.go:89] "kube-vip-ha-175414" [6b98571e-8ad5-45e0-acbc-d0e875647a69] Running
	I0815 23:22:36.742507   30687 system_pods.go:89] "kube-vip-ha-175414-m02" [4877d97c-4adb-4ce8-813f-0819e8a96b5a] Running
	I0815 23:22:36.742512   30687 system_pods.go:89] "storage-provisioner" [7042d764-6043-449c-a1e9-aaa28256c579] Running
	I0815 23:22:36.742521   30687 system_pods.go:126] duration metric: took 204.56185ms to wait for k8s-apps to be running ...
	I0815 23:22:36.742534   30687 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 23:22:36.742585   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:22:36.757271   30687 system_svc.go:56] duration metric: took 14.728453ms WaitForService to wait for kubelet
	I0815 23:22:36.757305   30687 kubeadm.go:582] duration metric: took 20.045692436s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
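
The kubelet check above is just a systemctl exit-code test executed over SSH inside the guest. Run locally, the same idea looks like the short sketch below (unit name shortened to plain kubelet; exit code 0 means the service is active):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // systemctl exits 0 when the unit is active; any non-zero exit surfaces as an error.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
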
	I0815 23:22:36.757327   30687 node_conditions.go:102] verifying NodePressure condition ...
	I0815 23:22:36.932664   30687 request.go:632] Waited for 175.26732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I0815 23:22:36.932737   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I0815 23:22:36.932748   30687 round_trippers.go:469] Request Headers:
	I0815 23:22:36.932757   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:22:36.932761   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:22:36.936589   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:22:36.937308   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:22:36.937326   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:22:36.937343   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:22:36.937347   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:22:36.937351   30687 node_conditions.go:105] duration metric: took 180.019245ms to run NodePressure ...
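
The NodePressure step above walks the node list and logs each node's reported ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs per node in this run). A minimal client-go sketch of that capacity readout, same kubeconfig assumption, illustrative only:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19452-12919/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }
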
	I0815 23:22:36.937361   30687 start.go:241] waiting for startup goroutines ...
	I0815 23:22:36.937383   30687 start.go:255] writing updated cluster config ...
	I0815 23:22:36.939442   30687 out.go:201] 
	I0815 23:22:36.941076   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:22:36.941201   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:22:36.942865   30687 out.go:177] * Starting "ha-175414-m03" control-plane node in "ha-175414" cluster
	I0815 23:22:36.943930   30687 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:22:36.943954   30687 cache.go:56] Caching tarball of preloaded images
	I0815 23:22:36.944051   30687 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:22:36.944060   30687 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:22:36.944141   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:22:36.944300   30687 start.go:360] acquireMachinesLock for ha-175414-m03: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:22:36.944341   30687 start.go:364] duration metric: took 23.052µs to acquireMachinesLock for "ha-175414-m03"
	I0815 23:22:36.944363   30687 start.go:93] Provisioning new machine with config: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:22:36.944456   30687 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0815 23:22:36.945756   30687 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 23:22:36.945839   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:22:36.945883   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:22:36.960464   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0815 23:22:36.960920   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:22:36.961411   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:22:36.961433   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:22:36.961899   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:22:36.962094   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetMachineName
	I0815 23:22:36.962257   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:22:36.962422   30687 start.go:159] libmachine.API.Create for "ha-175414" (driver="kvm2")
	I0815 23:22:36.962449   30687 client.go:168] LocalClient.Create starting
	I0815 23:22:36.962484   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0815 23:22:36.962527   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:22:36.962545   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:22:36.962607   30687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0815 23:22:36.962633   30687 main.go:141] libmachine: Decoding PEM data...
	I0815 23:22:36.962649   30687 main.go:141] libmachine: Parsing certificate...
	I0815 23:22:36.962675   30687 main.go:141] libmachine: Running pre-create checks...
	I0815 23:22:36.962686   30687 main.go:141] libmachine: (ha-175414-m03) Calling .PreCreateCheck
	I0815 23:22:36.962859   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetConfigRaw
	I0815 23:22:36.963190   30687 main.go:141] libmachine: Creating machine...
	I0815 23:22:36.963202   30687 main.go:141] libmachine: (ha-175414-m03) Calling .Create
	I0815 23:22:36.963324   30687 main.go:141] libmachine: (ha-175414-m03) Creating KVM machine...
	I0815 23:22:36.964577   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found existing default KVM network
	I0815 23:22:36.964715   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found existing private KVM network mk-ha-175414
	I0815 23:22:36.964846   30687 main.go:141] libmachine: (ha-175414-m03) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03 ...
	I0815 23:22:36.964867   30687 main.go:141] libmachine: (ha-175414-m03) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 23:22:36.964919   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:36.964843   31431 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:22:36.965050   30687 main.go:141] libmachine: (ha-175414-m03) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0815 23:22:37.192864   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:37.192752   31431 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa...
	I0815 23:22:37.272364   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:37.272256   31431 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/ha-175414-m03.rawdisk...
	I0815 23:22:37.272395   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Writing magic tar header
	I0815 23:22:37.272406   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Writing SSH key tar header
	I0815 23:22:37.272418   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:37.272367   31431 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03 ...
	I0815 23:22:37.272452   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03
	I0815 23:22:37.272466   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03 (perms=drwx------)
	I0815 23:22:37.272545   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0815 23:22:37.272569   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0815 23:22:37.272579   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:22:37.272594   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0815 23:22:37.272606   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 23:22:37.272617   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home/jenkins
	I0815 23:22:37.272625   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Checking permissions on dir: /home
	I0815 23:22:37.272638   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Skipping /home - not owner
	I0815 23:22:37.272674   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0815 23:22:37.272697   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0815 23:22:37.272720   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 23:22:37.272734   30687 main.go:141] libmachine: (ha-175414-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 23:22:37.272747   30687 main.go:141] libmachine: (ha-175414-m03) Creating domain...
	I0815 23:22:37.273666   30687 main.go:141] libmachine: (ha-175414-m03) define libvirt domain using xml: 
	I0815 23:22:37.273681   30687 main.go:141] libmachine: (ha-175414-m03) <domain type='kvm'>
	I0815 23:22:37.273689   30687 main.go:141] libmachine: (ha-175414-m03)   <name>ha-175414-m03</name>
	I0815 23:22:37.273694   30687 main.go:141] libmachine: (ha-175414-m03)   <memory unit='MiB'>2200</memory>
	I0815 23:22:37.273700   30687 main.go:141] libmachine: (ha-175414-m03)   <vcpu>2</vcpu>
	I0815 23:22:37.273705   30687 main.go:141] libmachine: (ha-175414-m03)   <features>
	I0815 23:22:37.273713   30687 main.go:141] libmachine: (ha-175414-m03)     <acpi/>
	I0815 23:22:37.273724   30687 main.go:141] libmachine: (ha-175414-m03)     <apic/>
	I0815 23:22:37.273732   30687 main.go:141] libmachine: (ha-175414-m03)     <pae/>
	I0815 23:22:37.273741   30687 main.go:141] libmachine: (ha-175414-m03)     
	I0815 23:22:37.273782   30687 main.go:141] libmachine: (ha-175414-m03)   </features>
	I0815 23:22:37.273808   30687 main.go:141] libmachine: (ha-175414-m03)   <cpu mode='host-passthrough'>
	I0815 23:22:37.273818   30687 main.go:141] libmachine: (ha-175414-m03)   
	I0815 23:22:37.273832   30687 main.go:141] libmachine: (ha-175414-m03)   </cpu>
	I0815 23:22:37.273856   30687 main.go:141] libmachine: (ha-175414-m03)   <os>
	I0815 23:22:37.273868   30687 main.go:141] libmachine: (ha-175414-m03)     <type>hvm</type>
	I0815 23:22:37.273880   30687 main.go:141] libmachine: (ha-175414-m03)     <boot dev='cdrom'/>
	I0815 23:22:37.273886   30687 main.go:141] libmachine: (ha-175414-m03)     <boot dev='hd'/>
	I0815 23:22:37.273901   30687 main.go:141] libmachine: (ha-175414-m03)     <bootmenu enable='no'/>
	I0815 23:22:37.273910   30687 main.go:141] libmachine: (ha-175414-m03)   </os>
	I0815 23:22:37.273923   30687 main.go:141] libmachine: (ha-175414-m03)   <devices>
	I0815 23:22:37.273934   30687 main.go:141] libmachine: (ha-175414-m03)     <disk type='file' device='cdrom'>
	I0815 23:22:37.273955   30687 main.go:141] libmachine: (ha-175414-m03)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/boot2docker.iso'/>
	I0815 23:22:37.273978   30687 main.go:141] libmachine: (ha-175414-m03)       <target dev='hdc' bus='scsi'/>
	I0815 23:22:37.273995   30687 main.go:141] libmachine: (ha-175414-m03)       <readonly/>
	I0815 23:22:37.274004   30687 main.go:141] libmachine: (ha-175414-m03)     </disk>
	I0815 23:22:37.274015   30687 main.go:141] libmachine: (ha-175414-m03)     <disk type='file' device='disk'>
	I0815 23:22:37.274035   30687 main.go:141] libmachine: (ha-175414-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 23:22:37.274050   30687 main.go:141] libmachine: (ha-175414-m03)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/ha-175414-m03.rawdisk'/>
	I0815 23:22:37.274063   30687 main.go:141] libmachine: (ha-175414-m03)       <target dev='hda' bus='virtio'/>
	I0815 23:22:37.274073   30687 main.go:141] libmachine: (ha-175414-m03)     </disk>
	I0815 23:22:37.274082   30687 main.go:141] libmachine: (ha-175414-m03)     <interface type='network'>
	I0815 23:22:37.274095   30687 main.go:141] libmachine: (ha-175414-m03)       <source network='mk-ha-175414'/>
	I0815 23:22:37.274118   30687 main.go:141] libmachine: (ha-175414-m03)       <model type='virtio'/>
	I0815 23:22:37.274137   30687 main.go:141] libmachine: (ha-175414-m03)     </interface>
	I0815 23:22:37.274151   30687 main.go:141] libmachine: (ha-175414-m03)     <interface type='network'>
	I0815 23:22:37.274164   30687 main.go:141] libmachine: (ha-175414-m03)       <source network='default'/>
	I0815 23:22:37.274177   30687 main.go:141] libmachine: (ha-175414-m03)       <model type='virtio'/>
	I0815 23:22:37.274187   30687 main.go:141] libmachine: (ha-175414-m03)     </interface>
	I0815 23:22:37.274199   30687 main.go:141] libmachine: (ha-175414-m03)     <serial type='pty'>
	I0815 23:22:37.274214   30687 main.go:141] libmachine: (ha-175414-m03)       <target port='0'/>
	I0815 23:22:37.274226   30687 main.go:141] libmachine: (ha-175414-m03)     </serial>
	I0815 23:22:37.274237   30687 main.go:141] libmachine: (ha-175414-m03)     <console type='pty'>
	I0815 23:22:37.274251   30687 main.go:141] libmachine: (ha-175414-m03)       <target type='serial' port='0'/>
	I0815 23:22:37.274262   30687 main.go:141] libmachine: (ha-175414-m03)     </console>
	I0815 23:22:37.274275   30687 main.go:141] libmachine: (ha-175414-m03)     <rng model='virtio'>
	I0815 23:22:37.274292   30687 main.go:141] libmachine: (ha-175414-m03)       <backend model='random'>/dev/random</backend>
	I0815 23:22:37.274304   30687 main.go:141] libmachine: (ha-175414-m03)     </rng>
	I0815 23:22:37.274313   30687 main.go:141] libmachine: (ha-175414-m03)     
	I0815 23:22:37.274322   30687 main.go:141] libmachine: (ha-175414-m03)     
	I0815 23:22:37.274333   30687 main.go:141] libmachine: (ha-175414-m03)   </devices>
	I0815 23:22:37.274345   30687 main.go:141] libmachine: (ha-175414-m03) </domain>
	I0815 23:22:37.274353   30687 main.go:141] libmachine: (ha-175414-m03) 
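	The XML fragment logged above is the libvirt domain definition the kvm2 driver builds for ha-175414-m03 (2 vCPUs, 2200 MiB of memory, the boot2docker ISO as a cdrom, the raw disk image, and two virtio NICs on the mk-ha-175414 and default networks). As a rough, hypothetical illustration of the same define-and-boot flow, the Go sketch below shells out to virsh; the file path, domain name, and the use of the virsh CLI are assumptions for illustration only, and the driver itself presumably uses libvirt bindings rather than shelling out like this.

	// Illustrative only: define and boot a libvirt domain from an XML file
	// shaped like the one logged above. virsh, the path, and the domain
	// name are assumptions; this is not the kvm2 driver's actual code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func defineAndStart(xmlPath, domain string) error {
		// "virsh define" registers the domain from its XML description.
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		// "virsh start" boots it; the driver then waits for the VM to get an IP.
		if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := defineAndStart("ha-175414-m03.xml", "ha-175414-m03"); err != nil {
			fmt.Println(err)
		}
	}
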
	I0815 23:22:37.280800   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:73:cb:49 in network default
	I0815 23:22:37.281372   30687 main.go:141] libmachine: (ha-175414-m03) Ensuring networks are active...
	I0815 23:22:37.281388   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:37.282151   30687 main.go:141] libmachine: (ha-175414-m03) Ensuring network default is active
	I0815 23:22:37.282496   30687 main.go:141] libmachine: (ha-175414-m03) Ensuring network mk-ha-175414 is active
	I0815 23:22:37.282842   30687 main.go:141] libmachine: (ha-175414-m03) Getting domain xml...
	I0815 23:22:37.283465   30687 main.go:141] libmachine: (ha-175414-m03) Creating domain...
	I0815 23:22:38.526952   30687 main.go:141] libmachine: (ha-175414-m03) Waiting to get IP...
	I0815 23:22:38.527758   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:38.528177   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:38.528204   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:38.528142   31431 retry.go:31] will retry after 239.145725ms: waiting for machine to come up
	I0815 23:22:38.768565   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:38.768982   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:38.769008   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:38.768952   31431 retry.go:31] will retry after 385.356446ms: waiting for machine to come up
	I0815 23:22:39.155461   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:39.155832   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:39.155864   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:39.155789   31431 retry.go:31] will retry after 312.62161ms: waiting for machine to come up
	I0815 23:22:39.470250   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:39.470675   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:39.470697   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:39.470643   31431 retry.go:31] will retry after 444.229589ms: waiting for machine to come up
	I0815 23:22:39.916243   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:39.916587   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:39.916613   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:39.916557   31431 retry.go:31] will retry after 620.629364ms: waiting for machine to come up
	I0815 23:22:40.539215   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:40.539587   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:40.539610   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:40.539557   31431 retry.go:31] will retry after 797.102726ms: waiting for machine to come up
	I0815 23:22:41.338452   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:41.338872   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:41.338903   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:41.338819   31431 retry.go:31] will retry after 759.026392ms: waiting for machine to come up
	I0815 23:22:42.099393   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:42.099813   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:42.099868   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:42.099797   31431 retry.go:31] will retry after 1.405444372s: waiting for machine to come up
	I0815 23:22:43.506843   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:43.507282   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:43.507304   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:43.507235   31431 retry.go:31] will retry after 1.309943276s: waiting for machine to come up
	I0815 23:22:44.818216   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:44.818664   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:44.818687   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:44.818630   31431 retry.go:31] will retry after 1.907729069s: waiting for machine to come up
	I0815 23:22:46.728655   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:46.729071   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:46.729096   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:46.729019   31431 retry.go:31] will retry after 1.767034123s: waiting for machine to come up
	I0815 23:22:48.497136   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:48.497534   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:48.497563   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:48.497498   31431 retry.go:31] will retry after 2.658746356s: waiting for machine to come up
	I0815 23:22:51.158963   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:51.159423   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:51.159449   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:51.159378   31431 retry.go:31] will retry after 4.113519624s: waiting for machine to come up
	I0815 23:22:55.274770   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:55.275134   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find current IP address of domain ha-175414-m03 in network mk-ha-175414
	I0815 23:22:55.275156   30687 main.go:141] libmachine: (ha-175414-m03) DBG | I0815 23:22:55.275094   31431 retry.go:31] will retry after 3.634365209s: waiting for machine to come up
	I0815 23:22:58.910902   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:58.911318   30687 main.go:141] libmachine: (ha-175414-m03) Found IP for machine: 192.168.39.100
	I0815 23:22:58.911351   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has current primary IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:58.911360   30687 main.go:141] libmachine: (ha-175414-m03) Reserving static IP address...
	I0815 23:22:58.911771   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find host DHCP lease matching {name: "ha-175414-m03", mac: "52:54:00:bc:81:69", ip: "192.168.39.100"} in network mk-ha-175414
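	The run of "will retry after ...: waiting for machine to come up" lines above is the driver polling for the new VM's IP address in network mk-ha-175414 with a randomized, growing delay until 192.168.39.100 appears. A minimal sketch of that wait loop follows; the lookup function, delay bounds, and timeout are illustrative assumptions, not values taken from minikube's retry.go.

	package machinewait

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP until it succeeds or the timeout elapses,
	// sleeping a randomized, growing delay between attempts (roughly the
	// ~0.2s to ~4s gaps visible in the log above).
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 2*time.Second {
				delay *= 2 // back off, but cap the growth
			}
		}
		return "", errors.New("timed out waiting for machine to get an IP")
	}
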
	I0815 23:22:58.984399   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Getting to WaitForSSH function...
	I0815 23:22:58.984424   30687 main.go:141] libmachine: (ha-175414-m03) Reserved static IP address: 192.168.39.100
	I0815 23:22:58.984434   30687 main.go:141] libmachine: (ha-175414-m03) Waiting for SSH to be available...
	I0815 23:22:58.987083   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:22:58.987453   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414
	I0815 23:22:58.987483   30687 main.go:141] libmachine: (ha-175414-m03) DBG | unable to find defined IP address of network mk-ha-175414 interface with MAC address 52:54:00:bc:81:69
	I0815 23:22:58.987587   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using SSH client type: external
	I0815 23:22:58.987615   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa (-rw-------)
	I0815 23:22:58.987647   30687 main.go:141] libmachine: (ha-175414-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:22:58.987665   30687 main.go:141] libmachine: (ha-175414-m03) DBG | About to run SSH command:
	I0815 23:22:58.987680   30687 main.go:141] libmachine: (ha-175414-m03) DBG | exit 0
	I0815 23:22:58.991442   30687 main.go:141] libmachine: (ha-175414-m03) DBG | SSH cmd err, output: exit status 255: 
	I0815 23:22:58.991470   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0815 23:22:58.991482   30687 main.go:141] libmachine: (ha-175414-m03) DBG | command : exit 0
	I0815 23:22:58.991489   30687 main.go:141] libmachine: (ha-175414-m03) DBG | err     : exit status 255
	I0815 23:22:58.991498   30687 main.go:141] libmachine: (ha-175414-m03) DBG | output  : 
	I0815 23:23:01.992175   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Getting to WaitForSSH function...
	I0815 23:23:01.994624   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:01.994987   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:01.995016   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:01.995182   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using SSH client type: external
	I0815 23:23:01.995210   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa (-rw-------)
	I0815 23:23:01.995242   30687 main.go:141] libmachine: (ha-175414-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 23:23:01.995259   30687 main.go:141] libmachine: (ha-175414-m03) DBG | About to run SSH command:
	I0815 23:23:01.995272   30687 main.go:141] libmachine: (ha-175414-m03) DBG | exit 0
	I0815 23:23:02.118140   30687 main.go:141] libmachine: (ha-175414-m03) DBG | SSH cmd err, output: <nil>: 
	I0815 23:23:02.118385   30687 main.go:141] libmachine: (ha-175414-m03) KVM machine creation complete!
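	WaitForSSH above is simply an SSH probe: run "exit 0" against the new VM until it returns status 0 (the first attempt fails with status 255 while sshd is still coming up, the second succeeds). Below is a hedged sketch of the same probe using golang.org/x/crypto/ssh; the host, user, and key path parameters are placeholders, and the external-SSH path logged above shells out to /usr/bin/ssh rather than using this package.

	package sshprobe

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// probeExitZero dials host:22 as user with the given private key and runs
	// "exit 0"; a nil error means sshd is up and accepting the key.
	func probeExitZero(host, user, keyPath string) error {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return fmt.Errorf("read key: %w", err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return fmt.Errorf("parse key: %w", err)
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, like StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		return session.Run("exit 0")
	}
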
	I0815 23:23:02.118681   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetConfigRaw
	I0815 23:23:02.119178   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:02.119358   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:02.119520   30687 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 23:23:02.119535   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:23:02.120637   30687 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 23:23:02.120649   30687 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 23:23:02.120654   30687 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 23:23:02.120660   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.123775   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.124135   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.124168   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.124321   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.124494   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.124674   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.124825   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.125013   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.125217   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.125230   30687 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 23:23:02.225333   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:23:02.225359   30687 main.go:141] libmachine: Detecting the provisioner...
	I0815 23:23:02.225367   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.228065   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.228446   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.228478   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.228618   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.228825   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.228946   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.229104   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.229228   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.229406   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.229418   30687 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 23:23:02.330731   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 23:23:02.330812   30687 main.go:141] libmachine: found compatible host: buildroot
	I0815 23:23:02.330822   30687 main.go:141] libmachine: Provisioning with buildroot...
	I0815 23:23:02.330833   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetMachineName
	I0815 23:23:02.331140   30687 buildroot.go:166] provisioning hostname "ha-175414-m03"
	I0815 23:23:02.331169   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetMachineName
	I0815 23:23:02.331351   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.334241   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.334719   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.334749   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.334925   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.335106   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.335247   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.335344   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.335520   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.335714   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.335728   30687 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-175414-m03 && echo "ha-175414-m03" | sudo tee /etc/hostname
	I0815 23:23:02.449420   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414-m03
	
	I0815 23:23:02.449447   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.452111   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.452479   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.452510   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.452680   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.452890   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.453043   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.453167   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.453345   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.453513   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.453529   30687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-175414-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-175414-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-175414-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:23:02.563978   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:23:02.564017   30687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:23:02.564041   30687 buildroot.go:174] setting up certificates
	I0815 23:23:02.564052   30687 provision.go:84] configureAuth start
	I0815 23:23:02.564067   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetMachineName
	I0815 23:23:02.564315   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:23:02.567178   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.567502   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.567531   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.567653   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.569617   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.569965   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.569985   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.570137   30687 provision.go:143] copyHostCerts
	I0815 23:23:02.570168   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:23:02.570207   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:23:02.570219   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:23:02.570308   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:23:02.570401   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:23:02.570425   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:23:02.570435   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:23:02.570472   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:23:02.570559   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:23:02.570582   30687 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:23:02.570592   30687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:23:02.570626   30687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:23:02.570693   30687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.ha-175414-m03 san=[127.0.0.1 192.168.39.100 ha-175414-m03 localhost minikube]
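	The provisioning step above generates a server certificate with SANs [127.0.0.1 192.168.39.100 ha-175414-m03 localhost minikube], signed by the minikube CA. For illustration only, the sketch below issues a certificate carrying the same IP and DNS SANs with crypto/x509; it is self-signed for brevity, whereas the real step signs with ca.pem/ca-key.pem.

	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// writeServerCert issues a self-signed server certificate whose SANs match
	// the ones logged above and writes it as PEM to certPath.
	func writeServerCert(certPath string) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-175414-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
			DNSNames:     []string{"ha-175414-m03", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			return err
		}
		return os.WriteFile(certPath, pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	}
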
	I0815 23:23:02.675214   30687 provision.go:177] copyRemoteCerts
	I0815 23:23:02.675265   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:23:02.675287   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.677993   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.678328   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.678359   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.678505   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.678710   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.678893   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.679033   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:23:02.760755   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:23:02.760833   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:23:02.786303   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:23:02.786368   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 23:23:02.811650   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:23:02.811736   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 23:23:02.836691   30687 provision.go:87] duration metric: took 272.627832ms to configureAuth
	I0815 23:23:02.836722   30687 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:23:02.836967   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:23:02.837035   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:02.839632   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.840085   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:02.840123   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:02.840303   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:02.840494   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.840651   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:02.840786   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:02.840978   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:02.841157   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:02.841178   30687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:23:03.107147   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:23:03.107176   30687 main.go:141] libmachine: Checking connection to Docker...
	I0815 23:23:03.107185   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetURL
	I0815 23:23:03.108442   30687 main.go:141] libmachine: (ha-175414-m03) DBG | Using libvirt version 6000000
	I0815 23:23:03.110717   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.111067   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.111088   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.111217   30687 main.go:141] libmachine: Docker is up and running!
	I0815 23:23:03.111234   30687 main.go:141] libmachine: Reticulating splines...
	I0815 23:23:03.111240   30687 client.go:171] duration metric: took 26.148784091s to LocalClient.Create
	I0815 23:23:03.111265   30687 start.go:167] duration metric: took 26.148842714s to libmachine.API.Create "ha-175414"
	I0815 23:23:03.111276   30687 start.go:293] postStartSetup for "ha-175414-m03" (driver="kvm2")
	I0815 23:23:03.111287   30687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:23:03.111303   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.111538   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:23:03.111566   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:03.113827   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.114157   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.114184   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.114308   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:03.114472   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.114581   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:03.114712   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:23:03.197278   30687 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:23:03.201711   30687 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:23:03.201738   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:23:03.201809   30687 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:23:03.201916   30687 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:23:03.201928   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:23:03.202029   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:23:03.212940   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:23:03.237483   30687 start.go:296] duration metric: took 126.192315ms for postStartSetup
	I0815 23:23:03.237538   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetConfigRaw
	I0815 23:23:03.238123   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:23:03.240597   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.240969   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.241001   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.241259   30687 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:23:03.241452   30687 start.go:128] duration metric: took 26.296987189s to createHost
	I0815 23:23:03.241473   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:03.243730   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.244074   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.244102   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.244303   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:03.244467   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.244578   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.244706   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:03.244839   30687 main.go:141] libmachine: Using SSH client type: native
	I0815 23:23:03.244992   30687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0815 23:23:03.245003   30687 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:23:03.346707   30687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764183.323470372
	
	I0815 23:23:03.346734   30687 fix.go:216] guest clock: 1723764183.323470372
	I0815 23:23:03.346745   30687 fix.go:229] Guest: 2024-08-15 23:23:03.323470372 +0000 UTC Remote: 2024-08-15 23:23:03.241463342 +0000 UTC m=+144.142728965 (delta=82.00703ms)
	I0815 23:23:03.346766   30687 fix.go:200] guest clock delta is within tolerance: 82.00703ms
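	The fix.go lines above parse the guest's "date +%s.%N" output, compare it with the host clock, and accept the ~82ms delta as within tolerance. A small sketch of that comparison; the tolerance passed in is the caller's choice, not a value taken from minikube.

	package clocksketch

	import (
		"strconv"
		"time"
	)

	// guestClockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N`
	// on the guest and returns guestTime minus hostTime.
	func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	// withinTolerance reports whether the absolute delta is acceptable;
	// the log above accepted a delta of ~82ms.
	func withinTolerance(delta, tolerance time.Duration) bool {
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}
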
	I0815 23:23:03.346778   30687 start.go:83] releasing machines lock for "ha-175414-m03", held for 26.402424779s
	I0815 23:23:03.346804   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.347066   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:23:03.349497   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.349866   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.349894   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.352155   30687 out.go:177] * Found network options:
	I0815 23:23:03.353454   30687 out.go:177]   - NO_PROXY=192.168.39.67,192.168.39.19
	W0815 23:23:03.354600   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 23:23:03.354620   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 23:23:03.354633   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.355155   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.355330   30687 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:23:03.355443   30687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:23:03.355480   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	W0815 23:23:03.355564   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 23:23:03.355587   30687 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 23:23:03.355650   30687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:23:03.355667   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:23:03.358223   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.358485   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.358612   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.358633   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.358803   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:03.358943   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:03.358970   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:03.358988   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.359165   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:23:03.359183   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:03.359409   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:23:03.359409   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:23:03.359567   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:23:03.359722   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:23:03.600165   30687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:23:03.606470   30687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:23:03.606543   30687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:23:03.624369   30687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 23:23:03.624398   30687 start.go:495] detecting cgroup driver to use...
	I0815 23:23:03.624467   30687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:23:03.641972   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:23:03.657096   30687 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:23:03.657151   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:23:03.672682   30687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:23:03.687557   30687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:23:03.817290   30687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:23:03.966704   30687 docker.go:233] disabling docker service ...
	I0815 23:23:03.966784   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:23:03.982293   30687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:23:03.996779   30687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:23:04.139971   30687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:23:04.275280   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:23:04.290218   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:23:04.309906   30687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:23:04.309964   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.320966   30687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:23:04.321031   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.332813   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.344559   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.355880   30687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:23:04.367959   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.379727   30687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.397354   30687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:23:04.408561   30687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:23:04.419480   30687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 23:23:04.419547   30687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 23:23:04.435676   30687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:23:04.446099   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:23:04.585087   30687 ssh_runner.go:195] Run: sudo systemctl restart crio
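
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs manager, conmon_cgroup, the unprivileged-port sysctl), checks the bridge-netfilter prerequisites, and then restarts CRI-O. As a rough illustration only (not minikube's own code), the netfilter/forwarding part boils down to something like the following Go sketch; paths mirror the log and it must run as root on the guest:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // The sysctl file only appears once the br_netfilter module is loaded.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
                os.Exit(1)
            }
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
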
	I0815 23:23:04.744693   30687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:23:04.744756   30687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:23:04.750939   30687 start.go:563] Will wait 60s for crictl version
	I0815 23:23:04.750998   30687 ssh_runner.go:195] Run: which crictl
	I0815 23:23:04.755210   30687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:23:04.794168   30687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:23:04.794259   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:23:04.823208   30687 ssh_runner.go:195] Run: crio --version
	I0815 23:23:04.853836   30687 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:23:04.855391   30687 out.go:177]   - env NO_PROXY=192.168.39.67
	I0815 23:23:04.856666   30687 out.go:177]   - env NO_PROXY=192.168.39.67,192.168.39.19
	I0815 23:23:04.857885   30687 main.go:141] libmachine: (ha-175414-m03) Calling .GetIP
	I0815 23:23:04.860408   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:04.860732   30687 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:23:04.860757   30687 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:23:04.860934   30687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:23:04.869156   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:23:04.887280   30687 mustload.go:65] Loading cluster: ha-175414
	I0815 23:23:04.887507   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:23:04.887822   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:23:04.887862   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:23:04.903163   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0815 23:23:04.903540   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:23:04.903965   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:23:04.903986   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:23:04.904299   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:23:04.904480   30687 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:23:04.905944   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:23:04.906210   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:23:04.906242   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:23:04.920592   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I0815 23:23:04.921008   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:23:04.921478   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:23:04.921500   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:23:04.921791   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:23:04.921982   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:23:04.922134   30687 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414 for IP: 192.168.39.100
	I0815 23:23:04.922146   30687 certs.go:194] generating shared ca certs ...
	I0815 23:23:04.922162   30687 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:23:04.922336   30687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:23:04.922385   30687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:23:04.922398   30687 certs.go:256] generating profile certs ...
	I0815 23:23:04.922492   30687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key
	I0815 23:23:04.922524   30687 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.88ea30ef
	I0815 23:23:04.922544   30687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.88ea30ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.19 192.168.39.100 192.168.39.254]
	I0815 23:23:05.013221   30687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.88ea30ef ...
	I0815 23:23:05.013250   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.88ea30ef: {Name:mke9ca6dedb4237b644aef94ccf2d01f0d66f5fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:23:05.013458   30687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.88ea30ef ...
	I0815 23:23:05.013474   30687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.88ea30ef: {Name:mkf1272ec8ffcdb7dd347b9fd6444ff28e322e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:23:05.013572   30687 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.88ea30ef -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt
	I0815 23:23:05.013715   30687 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.88ea30ef -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key
	I0815 23:23:05.013910   30687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key
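
The apiserver serving certificate is the one piece regenerated here: adding the m03 control-plane IP (192.168.39.100) changes the SAN set, so a new apiserver.crt/.key pair is written and signed by the shared minikube CA while the client and proxy certs are reused. A minimal crypto/x509 sketch of that signing step follows; the file names, RSA/PKCS#1 key type, and validity period are illustrative assumptions, not minikube's exact implementation:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Load the shared cluster CA (paths are illustrative).
        caCertPEM, err := os.ReadFile("ca.crt")
        must(err)
        caKeyPEM, err := os.ReadFile("ca.key")
        must(err)
        caBlock, _ := pem.Decode(caCertPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        must(err)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
        must(err)

        // Fresh key pair for the apiserver serving certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log line above: service IP, loopback, both existing
            // control-plane IPs, the new m03 IP, and the kube-vip VIP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.67"), net.ParseIP("192.168.39.19"),
                net.ParseIP("192.168.39.100"), net.ParseIP("192.168.39.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        must(err)
        must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
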
	I0815 23:23:05.013930   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:23:05.013947   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:23:05.013966   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:23:05.013984   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:23:05.014001   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:23:05.014018   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:23:05.014033   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:23:05.014051   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:23:05.014107   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:23:05.014143   30687 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:23:05.014156   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:23:05.014191   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:23:05.014222   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:23:05.014251   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:23:05.014305   30687 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:23:05.014340   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:23:05.014360   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:23:05.014378   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:23:05.014417   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:23:05.017367   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:23:05.017796   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:23:05.017826   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:23:05.018019   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:23:05.018193   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:23:05.018336   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:23:05.018484   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:23:05.094168   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 23:23:05.099428   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 23:23:05.111938   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 23:23:05.116389   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 23:23:05.127334   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 23:23:05.131814   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 23:23:05.143360   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 23:23:05.148404   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 23:23:05.159613   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 23:23:05.165596   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 23:23:05.182986   30687 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 23:23:05.187302   30687 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 23:23:05.198315   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:23:05.224068   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:23:05.250525   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:23:05.278089   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:23:05.306288   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0815 23:23:05.333737   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 23:23:05.358555   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:23:05.385743   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:23:05.411860   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:23:05.438272   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:23:05.462064   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:23:05.487122   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 23:23:05.503694   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 23:23:05.521140   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 23:23:05.538295   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 23:23:05.557736   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 23:23:05.576230   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 23:23:05.594248   30687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 23:23:05.611911   30687 ssh_runner.go:195] Run: openssl version
	I0815 23:23:05.617774   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:23:05.628998   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:23:05.633451   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:23:05.633510   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:23:05.639819   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0815 23:23:05.650555   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:23:05.661350   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:23:05.666039   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:23:05.666096   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:23:05.671902   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 23:23:05.682898   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:23:05.693953   30687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:23:05.698591   30687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:23:05.698637   30687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:23:05.704367   30687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
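
Each CA copied to /usr/share/ca-certificates also gets a /etc/ssl/certs/<subject-hash>.0 symlink, which is what the `openssl x509 -hash -noout` calls above compute (e.g. b5213941.0 for minikubeCA.pem) so that system TLS libraries can find the cert by hash. A small sketch of that hash-and-link step, shelling out to openssl the same way the log does (the certificate path is illustrative and the program needs root to write under /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Replace any stale link, then point <hash>.0 at the certificate.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
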
	I0815 23:23:05.715644   30687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:23:05.719985   30687 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 23:23:05.720047   30687 kubeadm.go:934] updating node {m03 192.168.39.100 8443 v1.31.0 crio true true} ...
	I0815 23:23:05.720143   30687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-175414-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:23:05.720177   30687 kube-vip.go:115] generating kube-vip config ...
	I0815 23:23:05.720220   30687 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 23:23:05.738353   30687 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 23:23:05.738410   30687 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
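
The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), so kubelet runs kube-vip as a static pod that advertises the VIP 192.168.39.254 over ARP on eth0, uses the plndr-cp-lock lease for leader election, and load-balances the control plane on port 8443. Purely as an illustration, a sketch that reads the generated manifest back and prints the VIP and load-balancer settings, using gopkg.in/yaml.v3 (an assumed dependency; minikube templates this file rather than parsing it):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    type manifest struct {
        Spec struct {
            Containers []struct {
                Env []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var m manifest
        if err := yaml.Unmarshal(data, &m); err != nil {
            panic(err)
        }
        if len(m.Spec.Containers) == 0 {
            panic("no containers in manifest")
        }
        for _, e := range m.Spec.Containers[0].Env {
            if e.Name == "address" || e.Name == "lb_enable" || e.Name == "lb_port" {
                fmt.Printf("%s=%s\n", e.Name, e.Value)
            }
        }
    }
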
	I0815 23:23:05.738456   30687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:23:05.748502   30687 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 23:23:05.748570   30687 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 23:23:05.759211   30687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 23:23:05.759224   30687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0815 23:23:05.759239   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 23:23:05.759260   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:23:05.759316   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 23:23:05.759215   30687 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0815 23:23:05.759365   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 23:23:05.759418   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 23:23:05.763829   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 23:23:05.763856   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 23:23:05.802101   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 23:23:05.802113   30687 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 23:23:05.802158   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 23:23:05.802230   30687 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 23:23:05.864204   30687 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 23:23:05.864247   30687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
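
Since /var/lib/minikube/binaries/v1.31.0 was empty, kubeadm, kubectl and kubelet are pulled from the dl.k8s.io URLs shown above, each verified against its published .sha256 file, and then copied onto the node. A hedged sketch of that verify-then-install flow for a single binary (the destination path comes from the log and requires root; error handling is kept minimal):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256")
        if err != nil {
            panic(err)
        }
        want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
        h := sha256.Sum256(bin)
        if got := hex.EncodeToString(h[:]); got != want {
            panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
        }
        if err := os.WriteFile("/var/lib/minikube/binaries/v1.31.0/kubectl", bin, 0o755); err != nil {
            panic(err)
        }
        fmt.Println("kubectl downloaded, verified and installed")
    }
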
	I0815 23:23:06.593208   30687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 23:23:06.603302   30687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 23:23:06.621144   30687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:23:06.639022   30687 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 23:23:06.655857   30687 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 23:23:06.659810   30687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 23:23:06.672850   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:23:06.822445   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:23:06.840787   30687 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:23:06.841164   30687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:23:06.841200   30687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:23:06.860009   30687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42183
	I0815 23:23:06.860431   30687 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:23:06.860886   30687 main.go:141] libmachine: Using API Version  1
	I0815 23:23:06.860900   30687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:23:06.861211   30687 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:23:06.861418   30687 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:23:06.861570   30687 start.go:317] joinCluster: &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:23:06.861689   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 23:23:06.861709   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:23:06.864542   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:23:06.864967   30687 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:23:06.864993   30687 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:23:06.865126   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:23:06.865303   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:23:06.865563   30687 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:23:06.865753   30687 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:23:07.015908   30687 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:23:07.015962   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4b87fr.idfvqj3ihtgii9y0 --discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-175414-m03 --control-plane --apiserver-advertise-address=192.168.39.100 --apiserver-bind-port=8443"
	I0815 23:23:30.056290   30687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4b87fr.idfvqj3ihtgii9y0 --discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-175414-m03 --control-plane --apiserver-advertise-address=192.168.39.100 --apiserver-bind-port=8443": (23.040299675s)
	I0815 23:23:30.056328   30687 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 23:23:30.701633   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-175414-m03 minikube.k8s.io/updated_at=2024_08_15T23_23_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=ha-175414 minikube.k8s.io/primary=false
	I0815 23:23:30.855501   30687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-175414-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 23:23:30.992421   30687 start.go:319] duration metric: took 24.13084471s to joinCluster
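
The kubeadm join above authenticates to the existing control plane with a short-lived bootstrap token, while --discovery-token-ca-cert-hash pins the cluster CA: the value is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A minimal sketch of computing that hash from ca.crt, equivalent to what the join command was given (the path is illustrative):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's discovery hash is sha256 over the DER-encoded SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }
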
	I0815 23:23:30.992489   30687 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 23:23:30.992862   30687 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:23:30.994013   30687 out.go:177] * Verifying Kubernetes components...
	I0815 23:23:30.995518   30687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:23:31.248956   30687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:23:31.299741   30687 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:23:31.300067   30687 kapi.go:59] client config for ha-175414: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.crt", KeyFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key", CAFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 23:23:31.300137   30687 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.67:8443
	I0815 23:23:31.300429   30687 node_ready.go:35] waiting up to 6m0s for node "ha-175414-m03" to be "Ready" ...
	I0815 23:23:31.300512   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:31.300522   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:31.300533   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:31.300541   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:31.304697   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:31.800646   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:31.800673   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:31.800684   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:31.800690   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:31.821927   30687 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0815 23:23:32.300818   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:32.300841   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:32.300851   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:32.300855   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:32.304922   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:32.800861   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:32.800887   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:32.800899   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:32.800905   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:32.805086   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:33.301448   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:33.301474   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:33.301486   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:33.301491   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:33.305342   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:33.306020   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:33.801280   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:33.801311   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:33.801328   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:33.801332   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:33.804915   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:34.301005   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:34.301028   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:34.301038   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:34.301042   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:34.304808   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:34.801270   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:34.801294   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:34.801302   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:34.801306   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:34.804891   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:35.300789   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:35.300826   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:35.300836   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:35.300842   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:35.304772   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:35.800882   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:35.800904   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:35.800912   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:35.800916   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:35.804995   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:35.805749   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:36.301069   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:36.301096   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:36.301107   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:36.301112   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:36.304719   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:36.801592   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:36.801614   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:36.801639   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:36.801645   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:36.808498   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:23:37.301354   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:37.301377   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:37.301384   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:37.301388   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:37.304801   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:37.801034   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:37.801060   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:37.801071   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:37.801076   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:37.804352   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:38.301589   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:38.301612   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:38.301620   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:38.301625   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:38.304817   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:38.305393   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:38.800761   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:38.800781   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:38.800790   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:38.800797   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:38.804278   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:39.301485   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:39.301503   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:39.301510   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:39.301515   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:39.305090   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:39.801519   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:39.801547   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:39.801557   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:39.801562   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:39.805725   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:40.300746   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:40.300783   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:40.300795   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:40.300800   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:40.304408   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:40.801407   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:40.801430   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:40.801439   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:40.801442   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:40.804765   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:40.805875   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:41.301344   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:41.301366   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:41.301374   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:41.301378   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:41.305023   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:41.801343   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:41.801366   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:41.801374   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:41.801378   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:41.804510   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:42.300635   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:42.300657   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:42.300669   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:42.300675   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:42.308550   30687 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 23:23:42.800687   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:42.800706   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:42.800715   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:42.800719   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:42.806893   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:23:42.807521   30687 node_ready.go:53] node "ha-175414-m03" has status "Ready":"False"
	I0815 23:23:43.300678   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:43.300704   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:43.300712   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:43.300717   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:43.304518   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:43.800646   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:43.800664   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:43.800675   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:43.800681   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:43.804280   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:44.301629   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:44.301652   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.301662   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.301667   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.304967   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:44.800870   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:44.800891   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.800899   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.800904   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.805708   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:44.806854   30687 node_ready.go:49] node "ha-175414-m03" has status "Ready":"True"
	I0815 23:23:44.806871   30687 node_ready.go:38] duration metric: took 13.506426047s for node "ha-175414-m03" to be "Ready" ...
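
The readiness wait above is a roughly 500ms poll of GET /api/v1/nodes/ha-175414-m03 until the node's Ready condition reports True, which here took about 13.5s after the join. The same check expressed with client-go, as a simplified sketch rather than minikube's actual poller (kubeconfig path and node name are taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19452-12919/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-175414-m03", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for node to become Ready")
    }
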
	I0815 23:23:44.806879   30687 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 23:23:44.806963   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:44.806974   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.806981   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.806985   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.813632   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:23:44.824518   30687 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.824604   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vkm5s
	I0815 23:23:44.824616   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.824626   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.824634   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.829322   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:44.830013   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:44.830029   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.830037   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.830046   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.832511   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.833213   30687 pod_ready.go:93] pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:44.833231   30687 pod_ready.go:82] duration metric: took 8.687111ms for pod "coredns-6f6b679f8f-vkm5s" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.833240   30687 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.833288   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zrv4c
	I0815 23:23:44.833296   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.833303   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.833307   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.835836   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.836440   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:44.836454   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.836464   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.836469   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.838784   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.839261   30687 pod_ready.go:93] pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:44.839277   30687 pod_ready.go:82] duration metric: took 6.030455ms for pod "coredns-6f6b679f8f-zrv4c" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.839287   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.839338   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414
	I0815 23:23:44.839347   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.839357   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.839364   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.841589   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.842021   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:44.842036   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.842053   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.842060   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.844617   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.845030   30687 pod_ready.go:93] pod "etcd-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:44.845049   30687 pod_ready.go:82] duration metric: took 5.755224ms for pod "etcd-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.845057   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.845107   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414-m02
	I0815 23:23:44.845115   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.845121   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.845125   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.847644   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:44.848244   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:44.848256   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:44.848263   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:44.848267   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:44.852966   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:44.853609   30687 pod_ready.go:93] pod "etcd-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:44.853624   30687 pod_ready.go:82] duration metric: took 8.561513ms for pod "etcd-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:44.853633   30687 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.001545   30687 request.go:632] Waited for 147.837871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414-m03
	I0815 23:23:45.001611   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/etcd-ha-175414-m03
	I0815 23:23:45.001624   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.001638   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.001648   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.005990   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:45.200923   30687 request.go:632] Waited for 194.292719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:45.200975   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:45.200980   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.200988   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.200991   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.204867   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:45.205962   30687 pod_ready.go:93] pod "etcd-ha-175414-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:45.205980   30687 pod_ready.go:82] duration metric: took 352.340987ms for pod "etcd-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.205996   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.401080   30687 request.go:632] Waited for 195.010527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414
	I0815 23:23:45.401134   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414
	I0815 23:23:45.401143   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.401153   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.401162   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.405069   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:45.600978   30687 request.go:632] Waited for 194.997854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:45.601036   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:45.601047   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.601058   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.601065   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.604238   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:45.604794   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:45.604813   30687 pod_ready.go:82] duration metric: took 398.811839ms for pod "kube-apiserver-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.604822   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:45.800898   30687 request.go:632] Waited for 195.997321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m02
	I0815 23:23:45.800956   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m02
	I0815 23:23:45.800964   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:45.800975   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:45.800982   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:45.805329   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:46.001563   30687 request.go:632] Waited for 195.379594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:46.001656   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:46.001669   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.001679   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.001689   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.005268   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.005756   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:46.005778   30687 pod_ready.go:82] duration metric: took 400.948427ms for pod "kube-apiserver-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.005790   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.200895   30687 request.go:632] Waited for 195.01624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m03
	I0815 23:23:46.200955   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-175414-m03
	I0815 23:23:46.200960   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.200970   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.200976   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.204629   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.401150   30687 request.go:632] Waited for 195.373693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:46.401206   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:46.401211   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.401230   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.401234   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.404647   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.405529   30687 pod_ready.go:93] pod "kube-apiserver-ha-175414-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:46.405547   30687 pod_ready.go:82] duration metric: took 399.747287ms for pod "kube-apiserver-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.405557   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.601364   30687 request.go:632] Waited for 195.751664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414
	I0815 23:23:46.601424   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414
	I0815 23:23:46.601445   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.601460   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.601465   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.605197   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.801302   30687 request.go:632] Waited for 195.345088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:46.801352   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:46.801357   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:46.801364   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:46.801368   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:46.804720   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:46.805266   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:46.805285   30687 pod_ready.go:82] duration metric: took 399.721484ms for pod "kube-controller-manager-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:46.805294   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.001221   30687 request.go:632] Waited for 195.863944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m02
	I0815 23:23:47.001305   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m02
	I0815 23:23:47.001315   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.001325   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.001335   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.005415   30687 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 23:23:47.201592   30687 request.go:632] Waited for 195.358667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:47.201666   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:47.201673   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.201682   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.201690   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.205325   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:47.205833   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:47.205870   30687 pod_ready.go:82] duration metric: took 400.568411ms for pod "kube-controller-manager-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.205884   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.401823   30687 request.go:632] Waited for 195.870502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m03
	I0815 23:23:47.401909   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-175414-m03
	I0815 23:23:47.401915   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.401922   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.401928   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.405203   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:47.601373   30687 request.go:632] Waited for 195.370984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:47.601443   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:47.601451   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.601461   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.601468   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.604549   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:47.605090   30687 pod_ready.go:93] pod "kube-controller-manager-ha-175414-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:47.605115   30687 pod_ready.go:82] duration metric: took 399.218678ms for pod "kube-controller-manager-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.605127   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4frcn" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:47.801148   30687 request.go:632] Waited for 195.940242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frcn
	I0815 23:23:47.801215   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frcn
	I0815 23:23:47.801220   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:47.801228   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:47.801233   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:47.805182   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.001380   30687 request.go:632] Waited for 195.387295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:48.001450   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:48.001457   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.001465   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.001471   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.004387   30687 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 23:23:48.004874   30687 pod_ready.go:93] pod "kube-proxy-4frcn" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:48.004900   30687 pod_ready.go:82] duration metric: took 399.761857ms for pod "kube-proxy-4frcn" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.004909   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dcnmc" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.200924   30687 request.go:632] Waited for 195.916214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcnmc
	I0815 23:23:48.200988   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcnmc
	I0815 23:23:48.200995   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.201004   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.201010   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.204684   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.400864   30687 request.go:632] Waited for 195.278732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:48.400912   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:48.400917   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.400924   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.400928   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.404359   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.405073   30687 pod_ready.go:93] pod "kube-proxy-dcnmc" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:48.405091   30687 pod_ready.go:82] duration metric: took 400.176798ms for pod "kube-proxy-dcnmc" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.405100   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qtps7" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.601203   30687 request.go:632] Waited for 196.039174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtps7
	I0815 23:23:48.601263   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qtps7
	I0815 23:23:48.601268   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.601276   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.601283   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.604652   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.800831   30687 request.go:632] Waited for 195.271767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:48.800905   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:48.800912   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:48.800921   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:48.800929   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:48.804469   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:48.805110   30687 pod_ready.go:93] pod "kube-proxy-qtps7" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:48.805127   30687 pod_ready.go:82] duration metric: took 400.021436ms for pod "kube-proxy-qtps7" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:48.805135   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.001327   30687 request.go:632] Waited for 196.131395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414
	I0815 23:23:49.001419   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414
	I0815 23:23:49.001429   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.001437   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.001441   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.005164   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:49.201496   30687 request.go:632] Waited for 195.769695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:49.201579   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414
	I0815 23:23:49.201587   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.201598   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.201605   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.204930   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:49.205615   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:49.205641   30687 pod_ready.go:82] duration metric: took 400.498233ms for pod "kube-scheduler-ha-175414" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.205653   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.400850   30687 request.go:632] Waited for 195.133191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m02
	I0815 23:23:49.400934   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m02
	I0815 23:23:49.400947   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.400958   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.400963   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.404499   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:49.601511   30687 request.go:632] Waited for 196.355466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:49.601600   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m02
	I0815 23:23:49.601610   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.601622   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.601632   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.605256   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:49.605716   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:49.605734   30687 pod_ready.go:82] duration metric: took 400.074118ms for pod "kube-scheduler-ha-175414-m02" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.605744   30687 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:49.801791   30687 request.go:632] Waited for 195.986943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m03
	I0815 23:23:49.801898   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-175414-m03
	I0815 23:23:49.801911   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:49.801921   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:49.801927   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:49.805859   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:50.001482   30687 request.go:632] Waited for 194.855782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:50.001552   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes/ha-175414-m03
	I0815 23:23:50.001559   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.001570   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.001579   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.004961   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:50.006147   30687 pod_ready.go:93] pod "kube-scheduler-ha-175414-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 23:23:50.006169   30687 pod_ready.go:82] duration metric: took 400.418594ms for pod "kube-scheduler-ha-175414-m03" in "kube-system" namespace to be "Ready" ...
	I0815 23:23:50.006184   30687 pod_ready.go:39] duration metric: took 5.199294359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
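	[editor's note] The repeated "Waited for ~195ms due to client-side throttling, not priority and fairness" entries above are emitted by client-go's request throttling: a token-bucket rate limiter that, at the library's documented defaults of 5 QPS with a burst of 10, spaces requests roughly 200ms apart once the burst is spent. A minimal sketch of that behaviour, assuming the standard k8s.io/client-go/util/flowcontrol package (the QPS/burst values are the client-go defaults, not numbers taken from this log, and minikube may configure its own):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/util/flowcontrol"
	)

	func main() {
		// Token-bucket limiter matching client-go's default REST client
		// settings: 5 requests/second, burst of 10. After the burst is
		// exhausted, each Accept() blocks for ~200ms, which is the wait
		// the "client-side throttling" log lines above are reporting.
		limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

		for i := 0; i < 15; i++ {
			start := time.Now()
			limiter.Accept() // blocks until a token is available
			fmt.Printf("request %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
		}
	}

	Running this prints near-zero waits for the first ten calls and then ~200ms waits, matching the cadence of the GET requests in the readiness loop above.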
	I0815 23:23:50.006204   30687 api_server.go:52] waiting for apiserver process to appear ...
	I0815 23:23:50.006268   30687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:23:50.022008   30687 api_server.go:72] duration metric: took 19.029466222s to wait for apiserver process to appear ...
	I0815 23:23:50.022041   30687 api_server.go:88] waiting for apiserver healthz status ...
	I0815 23:23:50.022061   30687 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0815 23:23:50.026169   30687 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I0815 23:23:50.026240   30687 round_trippers.go:463] GET https://192.168.39.67:8443/version
	I0815 23:23:50.026249   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.026257   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.026261   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.026974   30687 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 23:23:50.027131   30687 api_server.go:141] control plane version: v1.31.0
	I0815 23:23:50.027149   30687 api_server.go:131] duration metric: took 5.102316ms to wait for apiserver health ...
	I0815 23:23:50.027156   30687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 23:23:50.201552   30687 request.go:632] Waited for 174.330625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:50.201608   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:50.201614   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.201622   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.201626   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.207806   30687 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 23:23:50.216059   30687 system_pods.go:59] 24 kube-system pods found
	I0815 23:23:50.216091   30687 system_pods.go:61] "coredns-6f6b679f8f-vkm5s" [1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c] Running
	I0815 23:23:50.216098   30687 system_pods.go:61] "coredns-6f6b679f8f-zrv4c" [97d399d0-871e-4e59-8c4d-093b5a29a107] Running
	I0815 23:23:50.216104   30687 system_pods.go:61] "etcd-ha-175414" [8358595a-b7fc-40b0-b3a1-8bce46f618dd] Running
	I0815 23:23:50.216108   30687 system_pods.go:61] "etcd-ha-175414-m02" [fd9e81e9-bfd2-4040-9425-06a84b9c3dda] Running
	I0815 23:23:50.216114   30687 system_pods.go:61] "etcd-ha-175414-m03" [38df15d2-57c3-4c67-ac95-fee5aa93ec03] Running
	I0815 23:23:50.216119   30687 system_pods.go:61] "kindnet-47nts" [969ed4f0-c372-4d22-ba84-cfcd5774f1cf] Running
	I0815 23:23:50.216123   30687 system_pods.go:61] "kindnet-fp2gc" [b52bd53f-e131-4859-9825-3596c8dbab8f] Running
	I0815 23:23:50.216129   30687 system_pods.go:61] "kindnet-jjcdm" [534a226d-c0b6-4a2f-8b2c-27921c9e1aca] Running
	I0815 23:23:50.216134   30687 system_pods.go:61] "kube-apiserver-ha-175414" [74c0c52d-72f6-425e-ba1e-047ebb890ed4] Running
	I0815 23:23:50.216140   30687 system_pods.go:61] "kube-apiserver-ha-175414-m02" [019a6c53-1d80-40a3-93ea-6179c12e17ed] Running
	I0815 23:23:50.216147   30687 system_pods.go:61] "kube-apiserver-ha-175414-m03" [26088bb4-d35b-41a0-9eb0-688801e214fd] Running
	I0815 23:23:50.216154   30687 system_pods.go:61] "kube-controller-manager-ha-175414" [88aeb420-f593-4e18-8149-6fe48fd85b7d] Running
	I0815 23:23:50.216163   30687 system_pods.go:61] "kube-controller-manager-ha-175414-m02" [be3e762b-556f-4881-9a29-c9a867ccb5e7] Running
	I0815 23:23:50.216170   30687 system_pods.go:61] "kube-controller-manager-ha-175414-m03" [a6b31b93-6048-43ea-8e33-e33fb2eeaf43] Running
	I0815 23:23:50.216175   30687 system_pods.go:61] "kube-proxy-4frcn" [2831334a-a379-4f6d-ada3-53a01fc6f65e] Running
	I0815 23:23:50.216182   30687 system_pods.go:61] "kube-proxy-dcnmc" [572a1e80-23b0-4cb9-bfab-067b6853226d] Running
	I0815 23:23:50.216190   30687 system_pods.go:61] "kube-proxy-qtps7" [c5b0adc1-50ae-4b09-8704-1449c241d874] Running
	I0815 23:23:50.216195   30687 system_pods.go:61] "kube-scheduler-ha-175414" [7463fcbb-2a5f-4101-8b25-f72c74ca515a] Running
	I0815 23:23:50.216205   30687 system_pods.go:61] "kube-scheduler-ha-175414-m02" [1e5715dc-154a-4669-8a4e-986bb989a16b] Running
	I0815 23:23:50.216213   30687 system_pods.go:61] "kube-scheduler-ha-175414-m03" [06298593-3572-4444-a52c-1594e3a4ab79] Running
	I0815 23:23:50.216218   30687 system_pods.go:61] "kube-vip-ha-175414" [6b98571e-8ad5-45e0-acbc-d0e875647a69] Running
	I0815 23:23:50.216226   30687 system_pods.go:61] "kube-vip-ha-175414-m02" [4877d97c-4adb-4ce8-813f-0819e8a96b5a] Running
	I0815 23:23:50.216230   30687 system_pods.go:61] "kube-vip-ha-175414-m03" [40f35284-b260-46c5-9766-d8a59b5a80cc] Running
	I0815 23:23:50.216235   30687 system_pods.go:61] "storage-provisioner" [7042d764-6043-449c-a1e9-aaa28256c579] Running
	I0815 23:23:50.216245   30687 system_pods.go:74] duration metric: took 189.083233ms to wait for pod list to return data ...
	I0815 23:23:50.216258   30687 default_sa.go:34] waiting for default service account to be created ...
	I0815 23:23:50.401690   30687 request.go:632] Waited for 185.360404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I0815 23:23:50.401741   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/default/serviceaccounts
	I0815 23:23:50.401746   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.401753   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.401756   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.405572   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:50.405677   30687 default_sa.go:45] found service account: "default"
	I0815 23:23:50.405690   30687 default_sa.go:55] duration metric: took 189.426177ms for default service account to be created ...
	I0815 23:23:50.405700   30687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 23:23:50.600989   30687 request.go:632] Waited for 195.210751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:50.601046   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/namespaces/kube-system/pods
	I0815 23:23:50.601051   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.601058   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.601062   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.606926   30687 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 23:23:50.613402   30687 system_pods.go:86] 24 kube-system pods found
	I0815 23:23:50.613430   30687 system_pods.go:89] "coredns-6f6b679f8f-vkm5s" [1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c] Running
	I0815 23:23:50.613436   30687 system_pods.go:89] "coredns-6f6b679f8f-zrv4c" [97d399d0-871e-4e59-8c4d-093b5a29a107] Running
	I0815 23:23:50.613441   30687 system_pods.go:89] "etcd-ha-175414" [8358595a-b7fc-40b0-b3a1-8bce46f618dd] Running
	I0815 23:23:50.613446   30687 system_pods.go:89] "etcd-ha-175414-m02" [fd9e81e9-bfd2-4040-9425-06a84b9c3dda] Running
	I0815 23:23:50.613450   30687 system_pods.go:89] "etcd-ha-175414-m03" [38df15d2-57c3-4c67-ac95-fee5aa93ec03] Running
	I0815 23:23:50.613453   30687 system_pods.go:89] "kindnet-47nts" [969ed4f0-c372-4d22-ba84-cfcd5774f1cf] Running
	I0815 23:23:50.613458   30687 system_pods.go:89] "kindnet-fp2gc" [b52bd53f-e131-4859-9825-3596c8dbab8f] Running
	I0815 23:23:50.613464   30687 system_pods.go:89] "kindnet-jjcdm" [534a226d-c0b6-4a2f-8b2c-27921c9e1aca] Running
	I0815 23:23:50.613469   30687 system_pods.go:89] "kube-apiserver-ha-175414" [74c0c52d-72f6-425e-ba1e-047ebb890ed4] Running
	I0815 23:23:50.613475   30687 system_pods.go:89] "kube-apiserver-ha-175414-m02" [019a6c53-1d80-40a3-93ea-6179c12e17ed] Running
	I0815 23:23:50.613480   30687 system_pods.go:89] "kube-apiserver-ha-175414-m03" [26088bb4-d35b-41a0-9eb0-688801e214fd] Running
	I0815 23:23:50.613487   30687 system_pods.go:89] "kube-controller-manager-ha-175414" [88aeb420-f593-4e18-8149-6fe48fd85b7d] Running
	I0815 23:23:50.613496   30687 system_pods.go:89] "kube-controller-manager-ha-175414-m02" [be3e762b-556f-4881-9a29-c9a867ccb5e7] Running
	I0815 23:23:50.613502   30687 system_pods.go:89] "kube-controller-manager-ha-175414-m03" [a6b31b93-6048-43ea-8e33-e33fb2eeaf43] Running
	I0815 23:23:50.613510   30687 system_pods.go:89] "kube-proxy-4frcn" [2831334a-a379-4f6d-ada3-53a01fc6f65e] Running
	I0815 23:23:50.613514   30687 system_pods.go:89] "kube-proxy-dcnmc" [572a1e80-23b0-4cb9-bfab-067b6853226d] Running
	I0815 23:23:50.613518   30687 system_pods.go:89] "kube-proxy-qtps7" [c5b0adc1-50ae-4b09-8704-1449c241d874] Running
	I0815 23:23:50.613521   30687 system_pods.go:89] "kube-scheduler-ha-175414" [7463fcbb-2a5f-4101-8b25-f72c74ca515a] Running
	I0815 23:23:50.613525   30687 system_pods.go:89] "kube-scheduler-ha-175414-m02" [1e5715dc-154a-4669-8a4e-986bb989a16b] Running
	I0815 23:23:50.613528   30687 system_pods.go:89] "kube-scheduler-ha-175414-m03" [06298593-3572-4444-a52c-1594e3a4ab79] Running
	I0815 23:23:50.613532   30687 system_pods.go:89] "kube-vip-ha-175414" [6b98571e-8ad5-45e0-acbc-d0e875647a69] Running
	I0815 23:23:50.613537   30687 system_pods.go:89] "kube-vip-ha-175414-m02" [4877d97c-4adb-4ce8-813f-0819e8a96b5a] Running
	I0815 23:23:50.613540   30687 system_pods.go:89] "kube-vip-ha-175414-m03" [40f35284-b260-46c5-9766-d8a59b5a80cc] Running
	I0815 23:23:50.613543   30687 system_pods.go:89] "storage-provisioner" [7042d764-6043-449c-a1e9-aaa28256c579] Running
	I0815 23:23:50.613549   30687 system_pods.go:126] duration metric: took 207.843363ms to wait for k8s-apps to be running ...
	I0815 23:23:50.613558   30687 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 23:23:50.613611   30687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:23:50.629314   30687 system_svc.go:56] duration metric: took 15.74754ms WaitForService to wait for kubelet
	I0815 23:23:50.629344   30687 kubeadm.go:582] duration metric: took 19.636826655s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:23:50.629364   30687 node_conditions.go:102] verifying NodePressure condition ...
	I0815 23:23:50.801775   30687 request.go:632] Waited for 172.327841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.67:8443/api/v1/nodes
	I0815 23:23:50.801855   30687 round_trippers.go:463] GET https://192.168.39.67:8443/api/v1/nodes
	I0815 23:23:50.801863   30687 round_trippers.go:469] Request Headers:
	I0815 23:23:50.801874   30687 round_trippers.go:473]     Accept: application/json, */*
	I0815 23:23:50.801883   30687 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 23:23:50.805163   30687 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 23:23:50.806331   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:23:50.806355   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:23:50.806367   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:23:50.806373   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:23:50.806379   30687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 23:23:50.806385   30687 node_conditions.go:123] node cpu capacity is 2
	I0815 23:23:50.806394   30687 node_conditions.go:105] duration metric: took 177.024539ms to run NodePressure ...
	I0815 23:23:50.806412   30687 start.go:241] waiting for startup goroutines ...
	I0815 23:23:50.806440   30687 start.go:255] writing updated cluster config ...
	I0815 23:23:50.806880   30687 ssh_runner.go:195] Run: rm -f paused
	I0815 23:23:50.862887   30687 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 23:23:50.864906   30687 out.go:177] * Done! kubectl is now configured to use "ha-175414" cluster and "default" namespace by default
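	[editor's note] The tail of the start log above shows minikube probing the apiserver's /healthz endpoint (expecting the literal body "ok") and then reading /version before declaring the cluster ready. A minimal sketch of that probe in Go; the endpoint URL and expected responses are taken from the log, while the plain HTTP client with TLS verification disabled is a simplifying assumption for illustration, not minikube's actual implementation (minikube authenticates with client certificates):

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeAPIServer mirrors the two checks visible in the log: GET /healthz
	// must return "ok", then GET /version reports the control-plane version.
	func probeAPIServer(base string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		resp, err := client.Get(base + "/healthz")
		if err != nil {
			return err
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz not ready: %d %q", resp.StatusCode, body)
		}

		resp, err = client.Get(base + "/version")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		var v struct {
			GitVersion string `json:"gitVersion"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			return err
		}
		fmt.Println("control plane version:", v.GitVersion) // v1.31.0 in this run
		return nil
	}

	func main() {
		if err := probeAPIServer("https://192.168.39.67:8443"); err != nil {
			fmt.Println("apiserver not healthy yet:", err)
		}
	}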
	
	
	==> CRI-O <==
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.778430794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9798a7ec-40bd-45f0-b5e9-b8bfbea240c7 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.779767517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3865bd06-4680-468a-b0b5-a1f22cb40a5f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.780505387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764510780474402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3865bd06-4680-468a-b0b5-a1f22cb40a5f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.781666810Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de646b2a-3c39-41fb-9069-4b7f47e37086 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.781804189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de646b2a-3c39-41fb-9069-4b7f47e37086 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.782115360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764234620075693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100474735774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64,PodSandboxId:0f2dc7e79b3c74df25a4d1ebdc2d96c530541e3e962c0c36199d5ad7eea102cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100385963377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e,PodSandboxId:4c614a1c6c9dea073c43a9cd30ead9ad003f484689c554bd48ea1641a3a4abdc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723764100321406097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723764088513443509,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172376408
6148992845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55,PodSandboxId:34a71387942ef9bcbe15686c7fe9d58053c3e8ef143127344df17af40b41b882,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172376407625
7018114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e42bdbbf7659c494233926d7ef3e13,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764074424182895,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764074344815634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0,PodSandboxId:15475f8def71f4a6f45616da4d996e4c991a45545d8aacf02f59e373bf37a11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764074281578454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8,PodSandboxId:6b83d3bb335b68c84fbee1c11a8d3a78b69931e4d5b0b481badf3435346f0cc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764074310537239,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de646b2a-3c39-41fb-9069-4b7f47e37086 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.832458534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7346639d-52d0-4d0d-9e2d-7e48b0d3ba63 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.832536333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7346639d-52d0-4d0d-9e2d-7e48b0d3ba63 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.833773587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cab29861-c0e5-4740-bb8b-5edf4d1e1d8e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.834189753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764510834168736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cab29861-c0e5-4740-bb8b-5edf4d1e1d8e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.834899908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48a35246-a39e-4a6d-8887-e6eea06b9367 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.834949888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48a35246-a39e-4a6d-8887-e6eea06b9367 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.835164162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764234620075693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100474735774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64,PodSandboxId:0f2dc7e79b3c74df25a4d1ebdc2d96c530541e3e962c0c36199d5ad7eea102cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100385963377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e,PodSandboxId:4c614a1c6c9dea073c43a9cd30ead9ad003f484689c554bd48ea1641a3a4abdc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723764100321406097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723764088513443509,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172376408
6148992845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55,PodSandboxId:34a71387942ef9bcbe15686c7fe9d58053c3e8ef143127344df17af40b41b882,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172376407625
7018114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e42bdbbf7659c494233926d7ef3e13,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764074424182895,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764074344815634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0,PodSandboxId:15475f8def71f4a6f45616da4d996e4c991a45545d8aacf02f59e373bf37a11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764074281578454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8,PodSandboxId:6b83d3bb335b68c84fbee1c11a8d3a78b69931e4d5b0b481badf3435346f0cc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764074310537239,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48a35246-a39e-4a6d-8887-e6eea06b9367 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.839584436Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ed23e17-f3af-4e7b-947d-5ab93d3aab39 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.839809482Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-ztvms,Uid:68404862-5be0-4c89-8a76-4eb9f9dc682b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723764233623404001,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:23:51.809332415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-vkm5s,Uid:1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1723764100173956430,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:39.850481660Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c614a1c6c9dea073c43a9cd30ead9ad003f484689c554bd48ea1641a3a4abdc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7042d764-6043-449c-a1e9-aaa28256c579,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723764100157687641,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T23:21:39.851222458Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f2dc7e79b3c74df25a4d1ebdc2d96c530541e3e962c0c36199d5ad7eea102cf,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-zrv4c,Uid:97d399d0-871e-4e59-8c4d-093b5a29a107,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1723764100151328627,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:39.845001584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&PodSandboxMetadata{Name:kube-proxy-4frcn,Uid:2831334a-a379-4f6d-ada3-53a01fc6f65e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723764085983499222,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-15T23:21:25.055760598Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&PodSandboxMetadata{Name:kindnet-jjcdm,Uid:534a226d-c0b6-4a2f-8b2c-27921c9e1aca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723764085981415008,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:25.050451541Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34a71387942ef9bcbe15686c7fe9d58053c3e8ef143127344df17af40b41b882,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-175414,Uid:27e42bdbbf7659c494233926d7ef3e13,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1723764074110624484,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e42bdbbf7659c494233926d7ef3e13,},Annotations:map[string]string{kubernetes.io/config.hash: 27e42bdbbf7659c494233926d7ef3e13,kubernetes.io/config.seen: 2024-08-15T23:21:13.636165209Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-175414,Uid:02dd932293ae8c928398fa28db141a52,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723764074107581147,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 02dd
932293ae8c928398fa28db141a52,kubernetes.io/config.seen: 2024-08-15T23:21:13.636164281Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&PodSandboxMetadata{Name:etcd-ha-175414,Uid:88d31a53d81e2448a936fab3b5f0449d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723764074103556415,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.67:2379,kubernetes.io/config.hash: 88d31a53d81e2448a936fab3b5f0449d,kubernetes.io/config.seen: 2024-08-15T23:21:13.636157482Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:15475f8def71f4a6f45616da4d996e4c991a45545d8aacf02f59e373bf37a11a,Metadata:&PodSandboxMetadata{Name:kube-co
ntroller-manager-ha-175414,Uid:791e1ef83a25ef60ff5fe0211ab052ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723764074092778602,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 791e1ef83a25ef60ff5fe0211ab052ac,kubernetes.io/config.seen: 2024-08-15T23:21:13.636162927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6b83d3bb335b68c84fbee1c11a8d3a78b69931e4d5b0b481badf3435346f0cc7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-175414,Uid:6c3f4194728ec576cf8056e92c6671ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723764074089353440,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.67:8443,kubernetes.io/config.hash: 6c3f4194728ec576cf8056e92c6671ad,kubernetes.io/config.seen: 2024-08-15T23:21:13.636161495Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2ed23e17-f3af-4e7b-947d-5ab93d3aab39 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.840628225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=372b6b87-1a4a-4c90-a71c-c01abcf0b76c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.840679554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=372b6b87-1a4a-4c90-a71c-c01abcf0b76c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.840891254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764234620075693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100474735774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64,PodSandboxId:0f2dc7e79b3c74df25a4d1ebdc2d96c530541e3e962c0c36199d5ad7eea102cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100385963377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e,PodSandboxId:4c614a1c6c9dea073c43a9cd30ead9ad003f484689c554bd48ea1641a3a4abdc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723764100321406097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723764088513443509,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172376408
6148992845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55,PodSandboxId:34a71387942ef9bcbe15686c7fe9d58053c3e8ef143127344df17af40b41b882,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172376407625
7018114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e42bdbbf7659c494233926d7ef3e13,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764074424182895,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764074344815634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0,PodSandboxId:15475f8def71f4a6f45616da4d996e4c991a45545d8aacf02f59e373bf37a11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764074281578454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8,PodSandboxId:6b83d3bb335b68c84fbee1c11a8d3a78b69931e4d5b0b481badf3435346f0cc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764074310537239,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=372b6b87-1a4a-4c90-a71c-c01abcf0b76c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.877591316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d4ce5fc-ebe6-43b4-bd0a-6ad3e63889f8 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.877665950Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d4ce5fc-ebe6-43b4-bd0a-6ad3e63889f8 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.879144423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ceae8179-3122-4a96-b5ec-ec0ba30778b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.879654773Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764510879629002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceae8179-3122-4a96-b5ec-ec0ba30778b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.880290285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36c9752e-8f24-4f5f-8c6a-a5b7753dadbc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.880354252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36c9752e-8f24-4f5f-8c6a-a5b7753dadbc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:28:30 ha-175414 crio[681]: time="2024-08-15 23:28:30.880581131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764234620075693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100474735774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64,PodSandboxId:0f2dc7e79b3c74df25a4d1ebdc2d96c530541e3e962c0c36199d5ad7eea102cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764100385963377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e,PodSandboxId:4c614a1c6c9dea073c43a9cd30ead9ad003f484689c554bd48ea1641a3a4abdc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723764100321406097,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723764088513443509,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172376408
6148992845,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55,PodSandboxId:34a71387942ef9bcbe15686c7fe9d58053c3e8ef143127344df17af40b41b882,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172376407625
7018114,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e42bdbbf7659c494233926d7ef3e13,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764074424182895,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764074344815634,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0,PodSandboxId:15475f8def71f4a6f45616da4d996e4c991a45545d8aacf02f59e373bf37a11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764074281578454,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8,PodSandboxId:6b83d3bb335b68c84fbee1c11a8d3a78b69931e4d5b0b481badf3435346f0cc7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764074310537239,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36c9752e-8f24-4f5f-8c6a-a5b7753dadbc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6f2ac1a3791a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   1555ba5313b4a       busybox-7dff88458-ztvms
	d266fdeedd2d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   33df4c1e88a57       coredns-6f6b679f8f-vkm5s
	6bdc1076f0d11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0f2dc7e79b3c7       coredns-6f6b679f8f-zrv4c
	fd145e0bce0eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   4c614a1c6c9de       storage-provisioner
	dce83cbb20557       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   1392391da1090       kindnet-jjcdm
	70eb25dbc5fac       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   51e2286f4b6df       kube-proxy-4frcn
	41980bfc0d44a       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   34a71387942ef       kube-vip-ha-175414
	aaba7057e0920       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   94e761b5a2dbf       etcd-ha-175414
	af5abf6569d1f       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   6bc6e4c03eedb       kube-scheduler-ha-175414
	b61812e4ed00f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   6b83d3bb335b6       kube-apiserver-ha-175414
	0f0f5c055e67f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   15475f8def71f       kube-controller-manager-ha-175414
	
	
	==> coredns [6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64] <==
	[INFO] 10.244.2.2:42343 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003476687s
	[INFO] 10.244.2.2:34294 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204037s
	[INFO] 10.244.2.2:41230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132845s
	[INFO] 10.244.1.2:43940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132764s
	[INFO] 10.244.1.2:35236 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096436s
	[INFO] 10.244.1.2:41499 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127607s
	[INFO] 10.244.1.2:55520 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076785s
	[INFO] 10.244.1.2:46694 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099473s
	[INFO] 10.244.0.4:47376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152741s
	[INFO] 10.244.0.4:38412 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001860253s
	[INFO] 10.244.0.4:37064 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000527s
	[INFO] 10.244.0.4:57092 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096595s
	[INFO] 10.244.0.4:44776 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060092s
	[INFO] 10.244.0.4:49265 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034776s
	[INFO] 10.244.2.2:56855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153031s
	[INFO] 10.244.2.2:56811 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148425s
	[INFO] 10.244.2.2:56795 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112285s
	[INFO] 10.244.2.2:33122 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109125s
	[INFO] 10.244.1.2:53479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203125s
	[INFO] 10.244.0.4:39088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127065s
	[INFO] 10.244.0.4:44479 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007416s
	[INFO] 10.244.2.2:38995 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210639s
	[INFO] 10.244.2.2:51708 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000191376s
	[INFO] 10.244.1.2:46430 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129937s
	[INFO] 10.244.1.2:41358 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094083s
	
	
	==> coredns [d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38456 - 3166 "HINFO IN 1280106060145409119.2838945066204880542. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009788563s
	[INFO] 10.244.2.2:43352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000332013s
	[INFO] 10.244.2.2:55356 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230082s
	[INFO] 10.244.2.2:53708 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003726881s
	[INFO] 10.244.2.2:42627 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166307s
	[INFO] 10.244.2.2:37289 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162629s
	[INFO] 10.244.1.2:51252 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001943848s
	[INFO] 10.244.1.2:54890 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100499s
	[INFO] 10.244.1.2:34298 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001419075s
	[INFO] 10.244.0.4:33304 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001325515s
	[INFO] 10.244.0.4:42189 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073238s
	[INFO] 10.244.1.2:35312 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127561s
	[INFO] 10.244.1.2:42713 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174951s
	[INFO] 10.244.1.2:32898 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119329s
	[INFO] 10.244.0.4:58944 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116555s
	[INFO] 10.244.0.4:59435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073012s
	[INFO] 10.244.2.2:60026 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235829s
	[INFO] 10.244.2.2:58530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018432s
	[INFO] 10.244.1.2:44913 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119773s
	[INFO] 10.244.1.2:52756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123167s
	[INFO] 10.244.0.4:39480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124675s
	[INFO] 10.244.0.4:51365 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114789s
	[INFO] 10.244.0.4:49967 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068329s
	[INFO] 10.244.0.4:42637 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073642s
	
	
	==> describe nodes <==
	Name:               ha-175414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T23_21_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:21:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:28:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:24:24 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:24:24 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:24:24 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:24:24 +0000   Thu, 15 Aug 2024 23:21:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-175414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b0ddee9ca5943d7802a25ee6a9c7f34
	  System UUID:                7b0ddee9-ca59-43d7-802a-25ee6a9c7f34
	  Boot ID:                    a257efb5-ad21-419a-b259-592d48073d80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ztvms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-6f6b679f8f-vkm5s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 coredns-6f6b679f8f-zrv4c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 etcd-ha-175414                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m11s
	  kube-system                 kindnet-jjcdm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m6s
	  kube-system                 kube-apiserver-ha-175414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-controller-manager-ha-175414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-proxy-4frcn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 kube-scheduler-ha-175414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-vip-ha-175414                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     7m18s (x7 over 7m18s)  kubelet          Node ha-175414 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m18s (x8 over 7m18s)  kubelet          Node ha-175414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m18s (x8 over 7m18s)  kubelet          Node ha-175414 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m11s                  kubelet          Node ha-175414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m11s                  kubelet          Node ha-175414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m11s                  kubelet          Node ha-175414 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m7s                   node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal  NodeReady                6m52s                  kubelet          Node ha-175414 status is now: NodeReady
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	
	
	Name:               ha-175414-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_22_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:22:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:25:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 23:24:16 +0000   Thu, 15 Aug 2024 23:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 23:24:16 +0000   Thu, 15 Aug 2024 23:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 23:24:16 +0000   Thu, 15 Aug 2024 23:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 23:24:16 +0000   Thu, 15 Aug 2024 23:25:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-175414-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e48881ea1334f28a03d47bf7b09ff84
	  System UUID:                1e48881e-a133-4f28-a03d-47bf7b09ff84
	  Boot ID:                    1b12d3a1-294c-4b9b-8f62-e1a31d19c9ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kt8v4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-175414-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-47nts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-175414-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-175414-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-dcnmc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-175414-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-175414-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet          Node ha-175414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s (x8 over 6m18s)  kubelet          Node ha-175414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s (x7 over 6m18s)  kubelet          Node ha-175414-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  NodeNotReady             2m42s                  node-controller  Node ha-175414-m02 status is now: NodeNotReady
	
	
	Name:               ha-175414-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_23_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:28:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:23:57 +0000   Thu, 15 Aug 2024 23:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:23:57 +0000   Thu, 15 Aug 2024 23:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:23:57 +0000   Thu, 15 Aug 2024 23:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:23:57 +0000   Thu, 15 Aug 2024 23:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-175414-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03cd54aa1c764ef1be98b373af236f27
	  System UUID:                03cd54aa-1c76-4ef1-be98-b373af236f27
	  Boot ID:                    70b13ab6-f27f-49c0-87ea-06e9fc33a543
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-glqlv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-175414-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-fp2gc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system                 kube-apiserver-ha-175414-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-ha-175414-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-qtps7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-ha-175414-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-vip-ha-175414-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-175414-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-175414-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-175414-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	  Normal  RegisteredNode           4m55s                node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	
	
	Name:               ha-175414-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_24_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:24:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:28:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:25:01 +0000   Thu, 15 Aug 2024 23:24:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:25:01 +0000   Thu, 15 Aug 2024 23:24:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:25:01 +0000   Thu, 15 Aug 2024 23:24:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:25:01 +0000   Thu, 15 Aug 2024 23:24:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    ha-175414-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4da843156b4c43e0a4311c72833aae78
	  System UUID:                4da84315-6b4c-43e0-a431-1c72833aae78
	  Boot ID:                    2cdb3f67-21f7-46f8-9d79-849dd6359a7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6bf4q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-proxy-jm5fj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x2 over 4m2s)  kubelet          Node ha-175414-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x2 over 4m2s)  kubelet          Node ha-175414-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x2 over 4m2s)  kubelet          Node ha-175414-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal  NodeReady                3m43s                kubelet          Node ha-175414-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug15 23:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051277] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040299] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.817240] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.555052] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.598331] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 23:21] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.056390] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050948] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.198639] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.119702] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.271672] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.126980] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.023155] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.059629] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.252555] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.087359] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.483452] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.149794] kauditd_printk_skb: 38 callbacks suppressed
	[Aug15 23:22] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391] <==
	{"level":"warn","ts":"2024-08-15T23:28:30.801451Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:30.901581Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:30.990633Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:30.992957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.001791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.101468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.159692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.166351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.171596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.186523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.194224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.201306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.201655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.205742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.209709Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.219956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.227903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.235564Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.240330Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.269591Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.273643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.278023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.284731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.291704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:28:31.301501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:28:31 up 7 min,  0 users,  load average: 0.24, 0.25, 0.13
	Linux ha-175414 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a] <==
	I0815 23:27:59.564559       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:28:09.563644       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:28:09.563806       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:28:09.564010       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:28:09.564072       1 main.go:299] handling current node
	I0815 23:28:09.564121       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:28:09.564148       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:28:09.564379       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:28:09.564424       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:28:19.562804       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:28:19.562991       1 main.go:299] handling current node
	I0815 23:28:19.563090       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:28:19.563102       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:28:19.563359       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:28:19.563389       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:28:19.563485       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:28:19.563659       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:28:29.558760       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:28:29.558805       1 main.go:299] handling current node
	I0815 23:28:29.558861       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:28:29.558868       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:28:29.559020       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:28:29.559025       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:28:29.559075       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:28:29.559079       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8] <==
	I0815 23:21:25.012401       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0815 23:23:27.274010       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0815 23:23:27.274480       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 16.123µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0815 23:23:27.275895       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0815 23:23:27.277086       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0815 23:23:27.278403       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.508199ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0815 23:23:56.025489       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52068: use of closed network connection
	E0815 23:23:56.210192       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52100: use of closed network connection
	E0815 23:23:56.395640       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52118: use of closed network connection
	E0815 23:23:56.768921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52158: use of closed network connection
	E0815 23:23:56.954342       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52188: use of closed network connection
	E0815 23:23:57.138477       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52212: use of closed network connection
	E0815 23:23:57.324754       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52228: use of closed network connection
	E0815 23:23:57.806212       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52276: use of closed network connection
	E0815 23:23:57.994354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52294: use of closed network connection
	E0815 23:23:58.181153       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52312: use of closed network connection
	E0815 23:23:58.376907       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52324: use of closed network connection
	E0815 23:23:58.555602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52346: use of closed network connection
	E0815 23:23:58.742430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52362: use of closed network connection
	E0815 23:24:30.699162       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0815 23:24:30.699644       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 4.04µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0815 23:24:30.700989       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0815 23:24:30.702285       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0815 23:24:30.703789       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.400637ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-175414-m04.17ec0a787012cea0" result=null
	W0815 23:25:19.216094       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.67]
	
	
	==> kube-controller-manager [0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0] <==
	I0815 23:24:30.566429       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-175414-m04" podCIDRs=["10.244.3.0/24"]
	I0815 23:24:30.566492       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:30.566568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:30.831776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:30.917156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:31.478752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:31.495102       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:32.379626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:32.405826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:34.398632       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:34.399126       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-175414-m04"
	I0815 23:24:34.436307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:40.829948       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:48.211783       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-175414-m04"
	I0815 23:24:48.211938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:48.232862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:24:49.417111       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:25:01.073483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:25:49.444841       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-175414-m04"
	I0815 23:25:49.445121       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m02"
	I0815 23:25:49.467896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m02"
	I0815 23:25:49.596675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.716986ms"
	I0815 23:25:49.596825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.556µs"
	I0815 23:25:51.492069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m02"
	I0815 23:25:54.695663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m02"
	
	
	==> kube-proxy [70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:21:26.437594       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:21:26.454428       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E0815 23:21:26.454560       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:21:26.497573       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:21:26.497603       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:21:26.497632       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:21:26.500608       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:21:26.501148       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:21:26.501211       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:21:26.503006       1 config.go:197] "Starting service config controller"
	I0815 23:21:26.503068       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:21:26.503113       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:21:26.503130       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:21:26.507024       1 config.go:326] "Starting node config controller"
	I0815 23:21:26.507056       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:21:26.604288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:21:26.604396       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:21:26.607175       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755] <==
	W0815 23:21:18.828357       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 23:21:18.828571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0815 23:21:21.801578       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:23:51.808213       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kt8v4\": pod busybox-7dff88458-kt8v4 is already assigned to node \"ha-175414-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-kt8v4" node="ha-175414-m02"
	E0815 23:23:51.817460       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4f5d9ce8-0a98-4378-bc08-df90c934314a(default/busybox-7dff88458-kt8v4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-kt8v4"
	E0815 23:23:51.817514       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kt8v4\": pod busybox-7dff88458-kt8v4 is already assigned to node \"ha-175414-m02\"" pod="default/busybox-7dff88458-kt8v4"
	I0815 23:23:51.817561       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-kt8v4" node="ha-175414-m02"
	E0815 23:23:51.817338       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ztvms\": pod busybox-7dff88458-ztvms is already assigned to node \"ha-175414\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ztvms" node="ha-175414"
	E0815 23:23:51.818669       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ztvms\": pod busybox-7dff88458-ztvms is already assigned to node \"ha-175414\"" pod="default/busybox-7dff88458-ztvms"
	E0815 23:24:31.002905       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lw2tv\": pod kube-proxy-lw2tv is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lw2tv" node="ha-175414-m04"
	E0815 23:24:31.003009       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6591e4e0-ab34-481c-b826-bd56fa0ef01b(kube-system/kube-proxy-lw2tv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lw2tv"
	E0815 23:24:31.003032       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lw2tv\": pod kube-proxy-lw2tv is already assigned to node \"ha-175414-m04\"" pod="kube-system/kube-proxy-lw2tv"
	I0815 23:24:31.003065       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lw2tv" node="ha-175414-m04"
	E0815 23:24:31.009629       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m6wl5\": pod kindnet-m6wl5 is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m6wl5" node="ha-175414-m04"
	E0815 23:24:31.009730       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod efa64311-983a-46d2-88b4-306fc316f564(kube-system/kindnet-m6wl5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-m6wl5"
	E0815 23:24:31.009767       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m6wl5\": pod kindnet-m6wl5 is already assigned to node \"ha-175414-m04\"" pod="kube-system/kindnet-m6wl5"
	I0815 23:24:31.009797       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m6wl5" node="ha-175414-m04"
	E0815 23:24:31.089615       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w68mv\": pod kube-proxy-w68mv is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w68mv" node="ha-175414-m04"
	E0815 23:24:31.093322       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8dece2a7-e846-45c9-81a2-a5766b3e2a59(kube-system/kube-proxy-w68mv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w68mv"
	E0815 23:24:31.093536       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w68mv\": pod kube-proxy-w68mv is already assigned to node \"ha-175414-m04\"" pod="kube-system/kube-proxy-w68mv"
	I0815 23:24:31.093743       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w68mv" node="ha-175414-m04"
	E0815 23:24:31.092964       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-442dg\": pod kindnet-442dg is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-442dg" node="ha-175414-m04"
	E0815 23:24:31.099497       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a7abeee9-7619-4535-9654-3a395026f469(kube-system/kindnet-442dg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-442dg"
	E0815 23:24:31.099565       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-442dg\": pod kindnet-442dg is already assigned to node \"ha-175414-m04\"" pod="kube-system/kindnet-442dg"
	I0815 23:24:31.099706       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-442dg" node="ha-175414-m04"
	
	
	==> kubelet <==
	Aug 15 23:27:20 ha-175414 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:27:20 ha-175414 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:27:20 ha-175414 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:27:20 ha-175414 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:27:20 ha-175414 kubelet[1322]: E0815 23:27:20.999908    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764440999195309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:20 ha-175414 kubelet[1322]: E0815 23:27:20.999949    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764440999195309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:31 ha-175414 kubelet[1322]: E0815 23:27:31.002799    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764451002069047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:31 ha-175414 kubelet[1322]: E0815 23:27:31.003193    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764451002069047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:41 ha-175414 kubelet[1322]: E0815 23:27:41.004940    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764461004488239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:41 ha-175414 kubelet[1322]: E0815 23:27:41.005762    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764461004488239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:51 ha-175414 kubelet[1322]: E0815 23:27:51.008660    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764471007884266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:27:51 ha-175414 kubelet[1322]: E0815 23:27:51.008722    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764471007884266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:28:01 ha-175414 kubelet[1322]: E0815 23:28:01.011168    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764481010286270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:28:01 ha-175414 kubelet[1322]: E0815 23:28:01.011808    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764481010286270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:28:11 ha-175414 kubelet[1322]: E0815 23:28:11.014448    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764491013677132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:28:11 ha-175414 kubelet[1322]: E0815 23:28:11.014892    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764491013677132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:28:20 ha-175414 kubelet[1322]: E0815 23:28:20.858825    1322 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:28:20 ha-175414 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:28:20 ha-175414 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:28:20 ha-175414 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:28:20 ha-175414 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:28:21 ha-175414 kubelet[1322]: E0815 23:28:21.017671    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764501016933246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:28:21 ha-175414 kubelet[1322]: E0815 23:28:21.017714    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764501016933246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:28:31 ha-175414 kubelet[1322]: E0815 23:28:31.019074    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764511018663831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:28:31 ha-175414 kubelet[1322]: E0815 23:28:31.019099    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764511018663831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-175414 -n ha-175414
helpers_test.go:261: (dbg) Run:  kubectl --context ha-175414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.88s)

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.13s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-175414 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-175414 -v=7 --alsologtostderr
E0815 23:29:53.799240   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:30:21.501764   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-175414 -v=7 --alsologtostderr: exit status 82 (2m1.905522312s)

-- stdout --
	* Stopping node "ha-175414-m04"  ...
	* Stopping node "ha-175414-m03"  ...
	
	

-- /stdout --
** stderr ** 
	I0815 23:28:32.735693   36484 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:28:32.735824   36484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:28:32.735834   36484 out.go:358] Setting ErrFile to fd 2...
	I0815 23:28:32.735839   36484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:28:32.736060   36484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:28:32.736317   36484 out.go:352] Setting JSON to false
	I0815 23:28:32.736424   36484 mustload.go:65] Loading cluster: ha-175414
	I0815 23:28:32.736842   36484 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:28:32.736942   36484 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:28:32.737136   36484 mustload.go:65] Loading cluster: ha-175414
	I0815 23:28:32.737312   36484 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:28:32.737364   36484 stop.go:39] StopHost: ha-175414-m04
	I0815 23:28:32.737831   36484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:32.737897   36484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:32.753944   36484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0815 23:28:32.754376   36484 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:32.754914   36484 main.go:141] libmachine: Using API Version  1
	I0815 23:28:32.754942   36484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:32.755293   36484 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:32.757576   36484 out.go:177] * Stopping node "ha-175414-m04"  ...
	I0815 23:28:32.759013   36484 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 23:28:32.759039   36484 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:28:32.759270   36484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 23:28:32.759292   36484 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:28:32.761980   36484 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:32.762441   36484 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:24:14 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:28:32.762466   36484 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:28:32.762594   36484 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:28:32.762781   36484 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:28:32.762908   36484 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:28:32.763081   36484 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:28:32.849797   36484 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 23:28:32.905284   36484 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 23:28:32.959853   36484 main.go:141] libmachine: Stopping "ha-175414-m04"...
	I0815 23:28:32.959903   36484 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:28:32.961249   36484 main.go:141] libmachine: (ha-175414-m04) Calling .Stop
	I0815 23:28:32.964656   36484 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 0/120
	I0815 23:28:34.188092   36484 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:28:34.189294   36484 main.go:141] libmachine: Machine "ha-175414-m04" was stopped.
	I0815 23:28:34.189314   36484 stop.go:75] duration metric: took 1.430307341s to stop
	I0815 23:28:34.189347   36484 stop.go:39] StopHost: ha-175414-m03
	I0815 23:28:34.189740   36484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:28:34.189780   36484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:28:34.204399   36484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42629
	I0815 23:28:34.204770   36484 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:28:34.205212   36484 main.go:141] libmachine: Using API Version  1
	I0815 23:28:34.205235   36484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:28:34.205529   36484 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:28:34.207403   36484 out.go:177] * Stopping node "ha-175414-m03"  ...
	I0815 23:28:34.208505   36484 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 23:28:34.208535   36484 main.go:141] libmachine: (ha-175414-m03) Calling .DriverName
	I0815 23:28:34.208751   36484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 23:28:34.208772   36484 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHHostname
	I0815 23:28:34.211534   36484 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:34.211957   36484 main.go:141] libmachine: (ha-175414-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:81:69", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:22:52 +0000 UTC Type:0 Mac:52:54:00:bc:81:69 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-175414-m03 Clientid:01:52:54:00:bc:81:69}
	I0815 23:28:34.211986   36484 main.go:141] libmachine: (ha-175414-m03) DBG | domain ha-175414-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:81:69 in network mk-ha-175414
	I0815 23:28:34.212199   36484 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHPort
	I0815 23:28:34.212371   36484 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHKeyPath
	I0815 23:28:34.212543   36484 main.go:141] libmachine: (ha-175414-m03) Calling .GetSSHUsername
	I0815 23:28:34.212665   36484 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m03/id_rsa Username:docker}
	I0815 23:28:34.294480   36484 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 23:28:34.348955   36484 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 23:28:34.404087   36484 main.go:141] libmachine: Stopping "ha-175414-m03"...
	I0815 23:28:34.404120   36484 main.go:141] libmachine: (ha-175414-m03) Calling .GetState
	I0815 23:28:34.405706   36484 main.go:141] libmachine: (ha-175414-m03) Calling .Stop
	I0815 23:28:34.408987   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 0/120
	I0815 23:28:35.410282   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 1/120
	I0815 23:28:36.411597   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 2/120
	I0815 23:28:37.413124   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 3/120
	I0815 23:28:38.414693   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 4/120
	I0815 23:28:39.416726   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 5/120
	I0815 23:28:40.418055   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 6/120
	I0815 23:28:41.419576   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 7/120
	I0815 23:28:42.420840   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 8/120
	I0815 23:28:43.422487   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 9/120
	I0815 23:28:44.424839   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 10/120
	I0815 23:28:45.426239   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 11/120
	I0815 23:28:46.427730   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 12/120
	I0815 23:28:47.429049   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 13/120
	I0815 23:28:48.430510   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 14/120
	I0815 23:28:49.432792   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 15/120
	I0815 23:28:50.434140   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 16/120
	I0815 23:28:51.435520   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 17/120
	I0815 23:28:52.437117   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 18/120
	I0815 23:28:53.438467   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 19/120
	I0815 23:28:54.440422   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 20/120
	I0815 23:28:55.441924   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 21/120
	I0815 23:28:56.443196   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 22/120
	I0815 23:28:57.444687   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 23/120
	I0815 23:28:58.445884   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 24/120
	I0815 23:28:59.447411   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 25/120
	I0815 23:29:00.448904   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 26/120
	I0815 23:29:01.450178   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 27/120
	I0815 23:29:02.451758   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 28/120
	I0815 23:29:03.452925   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 29/120
	I0815 23:29:04.454379   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 30/120
	I0815 23:29:05.455687   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 31/120
	I0815 23:29:06.456953   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 32/120
	I0815 23:29:07.458569   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 33/120
	I0815 23:29:08.459758   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 34/120
	I0815 23:29:09.461294   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 35/120
	I0815 23:29:10.462640   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 36/120
	I0815 23:29:11.464194   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 37/120
	I0815 23:29:12.465600   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 38/120
	I0815 23:29:13.466907   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 39/120
	I0815 23:29:14.468562   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 40/120
	I0815 23:29:15.469774   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 41/120
	I0815 23:29:16.470976   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 42/120
	I0815 23:29:17.472681   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 43/120
	I0815 23:29:18.473864   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 44/120
	I0815 23:29:19.475584   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 45/120
	I0815 23:29:20.477037   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 46/120
	I0815 23:29:21.478799   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 47/120
	I0815 23:29:22.480187   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 48/120
	I0815 23:29:23.481475   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 49/120
	I0815 23:29:24.482897   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 50/120
	I0815 23:29:25.484345   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 51/120
	I0815 23:29:26.486534   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 52/120
	I0815 23:29:27.487732   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 53/120
	I0815 23:29:28.489318   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 54/120
	I0815 23:29:29.491200   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 55/120
	I0815 23:29:30.493065   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 56/120
	I0815 23:29:31.494717   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 57/120
	I0815 23:29:32.496056   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 58/120
	I0815 23:29:33.497516   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 59/120
	I0815 23:29:34.499592   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 60/120
	I0815 23:29:35.502101   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 61/120
	I0815 23:29:36.503755   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 62/120
	I0815 23:29:37.505836   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 63/120
	I0815 23:29:38.507475   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 64/120
	I0815 23:29:39.509288   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 65/120
	I0815 23:29:40.510863   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 66/120
	I0815 23:29:41.512389   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 67/120
	I0815 23:29:42.514083   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 68/120
	I0815 23:29:43.515469   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 69/120
	I0815 23:29:44.516890   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 70/120
	I0815 23:29:45.518425   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 71/120
	I0815 23:29:46.519751   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 72/120
	I0815 23:29:47.521199   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 73/120
	I0815 23:29:48.522502   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 74/120
	I0815 23:29:49.523973   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 75/120
	I0815 23:29:50.525344   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 76/120
	I0815 23:29:51.526963   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 77/120
	I0815 23:29:52.528399   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 78/120
	I0815 23:29:53.529662   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 79/120
	I0815 23:29:54.531576   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 80/120
	I0815 23:29:55.532963   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 81/120
	I0815 23:29:56.534387   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 82/120
	I0815 23:29:57.535629   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 83/120
	I0815 23:29:58.536897   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 84/120
	I0815 23:29:59.538765   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 85/120
	I0815 23:30:00.540156   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 86/120
	I0815 23:30:01.541537   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 87/120
	I0815 23:30:02.542789   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 88/120
	I0815 23:30:03.544236   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 89/120
	I0815 23:30:04.546149   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 90/120
	I0815 23:30:05.547480   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 91/120
	I0815 23:30:06.549075   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 92/120
	I0815 23:30:07.550461   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 93/120
	I0815 23:30:08.551997   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 94/120
	I0815 23:30:09.553413   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 95/120
	I0815 23:30:10.554821   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 96/120
	I0815 23:30:11.556172   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 97/120
	I0815 23:30:12.557366   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 98/120
	I0815 23:30:13.558692   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 99/120
	I0815 23:30:14.560336   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 100/120
	I0815 23:30:15.561631   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 101/120
	I0815 23:30:16.563088   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 102/120
	I0815 23:30:17.564390   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 103/120
	I0815 23:30:18.565686   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 104/120
	I0815 23:30:19.568041   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 105/120
	I0815 23:30:20.569403   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 106/120
	I0815 23:30:21.570736   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 107/120
	I0815 23:30:22.572282   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 108/120
	I0815 23:30:23.573702   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 109/120
	I0815 23:30:24.575433   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 110/120
	I0815 23:30:25.576943   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 111/120
	I0815 23:30:26.578379   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 112/120
	I0815 23:30:27.580344   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 113/120
	I0815 23:30:28.581724   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 114/120
	I0815 23:30:29.583337   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 115/120
	I0815 23:30:30.584806   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 116/120
	I0815 23:30:31.586285   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 117/120
	I0815 23:30:32.588383   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 118/120
	I0815 23:30:33.590062   36484 main.go:141] libmachine: (ha-175414-m03) Waiting for machine to stop 119/120
	I0815 23:30:34.590878   36484 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 23:30:34.590951   36484 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 23:30:34.593070   36484 out.go:201] 
	W0815 23:30:34.594333   36484 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 23:30:34.594351   36484 out.go:270] * 
	* 
	W0815 23:30:34.597157   36484 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 23:30:34.598778   36484 out.go:201] 

** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-175414 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-175414 --wait=true -v=7 --alsologtostderr
E0815 23:32:51.160248   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:34:14.229009   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-175414 --wait=true -v=7 --alsologtostderr: (4m14.690484366s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-175414
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-175414 -n ha-175414
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-175414 logs -n 25: (1.790569609s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m02:/home/docker/cp-test_ha-175414-m03_ha-175414-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m02 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04:/home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m04 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp testdata/cp-test.txt                                               | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile430320474/001/cp-test_ha-175414-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414:/home/docker/cp-test_ha-175414-m04_ha-175414.txt                      |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414 sudo cat                                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414.txt                                |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m02:/home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m02 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03:/home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m03 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-175414 node stop m02 -v=7                                                    | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-175414 node start m02 -v=7                                                   | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:27 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-175414 -v=7                                                          | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-175414 -v=7                                                               | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-175414 --wait=true -v=7                                                   | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:30 UTC | 15 Aug 24 23:34 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-175414                                                               | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:34 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:30:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:30:34.642752   36963 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:30:34.642880   36963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:30:34.642890   36963 out.go:358] Setting ErrFile to fd 2...
	I0815 23:30:34.642896   36963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:30:34.643108   36963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:30:34.644159   36963 out.go:352] Setting JSON to false
	I0815 23:30:34.645446   36963 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4335,"bootTime":1723760300,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:30:34.645516   36963 start.go:139] virtualization: kvm guest
	I0815 23:30:34.647349   36963 out.go:177] * [ha-175414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:30:34.649060   36963 notify.go:220] Checking for updates...
	I0815 23:30:34.649072   36963 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:30:34.650519   36963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:30:34.651631   36963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:30:34.652723   36963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:30:34.653920   36963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:30:34.655131   36963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:30:34.656847   36963 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:30:34.656957   36963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:30:34.657396   36963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:30:34.657436   36963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:30:34.673264   36963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0815 23:30:34.673746   36963 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:30:34.674352   36963 main.go:141] libmachine: Using API Version  1
	I0815 23:30:34.674371   36963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:30:34.674732   36963 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:30:34.674973   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:30:34.711109   36963 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 23:30:34.712353   36963 start.go:297] selected driver: kvm2
	I0815 23:30:34.712376   36963 start.go:901] validating driver "kvm2" against &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.32 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:30:34.712582   36963 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:30:34.712934   36963 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:30:34.713012   36963 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:30:34.727574   36963 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:30:34.728246   36963 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:30:34.728314   36963 cni.go:84] Creating CNI manager for ""
	I0815 23:30:34.728329   36963 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 23:30:34.728396   36963 start.go:340] cluster config:
	{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.32 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:30:34.728554   36963 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:30:34.730537   36963 out.go:177] * Starting "ha-175414" primary control-plane node in "ha-175414" cluster
	I0815 23:30:34.731739   36963 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:30:34.731776   36963 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:30:34.731783   36963 cache.go:56] Caching tarball of preloaded images
	I0815 23:30:34.731866   36963 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:30:34.731882   36963 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:30:34.731995   36963 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:30:34.732216   36963 start.go:360] acquireMachinesLock for ha-175414: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:30:34.732278   36963 start.go:364] duration metric: took 36.827µs to acquireMachinesLock for "ha-175414"
	I0815 23:30:34.732306   36963 start.go:96] Skipping create...Using existing machine configuration
	I0815 23:30:34.732318   36963 fix.go:54] fixHost starting: 
	I0815 23:30:34.732562   36963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:30:34.732590   36963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:30:34.748338   36963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0815 23:30:34.748768   36963 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:30:34.749202   36963 main.go:141] libmachine: Using API Version  1
	I0815 23:30:34.749222   36963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:30:34.749532   36963 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:30:34.749723   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:30:34.749908   36963 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:30:34.751646   36963 fix.go:112] recreateIfNeeded on ha-175414: state=Running err=<nil>
	W0815 23:30:34.751663   36963 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 23:30:34.753637   36963 out.go:177] * Updating the running kvm2 "ha-175414" VM ...
	I0815 23:30:34.754817   36963 machine.go:93] provisionDockerMachine start ...
	I0815 23:30:34.754838   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:30:34.755044   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:34.757515   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:34.757974   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:34.758012   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:34.758140   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:34.758293   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:34.758437   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:34.758581   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:34.758720   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:30:34.758947   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:30:34.758965   36963 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 23:30:34.874050   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414
	
	I0815 23:30:34.874090   36963 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:30:34.874332   36963 buildroot.go:166] provisioning hostname "ha-175414"
	I0815 23:30:34.874372   36963 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:30:34.874606   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:34.877072   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:34.877433   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:34.877460   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:34.877592   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:34.877739   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:34.877905   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:34.878051   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:34.878216   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:30:34.878393   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:30:34.878406   36963 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-175414 && echo "ha-175414" | sudo tee /etc/hostname
	I0815 23:30:35.004425   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414
	
	I0815 23:30:35.004446   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:35.007473   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.007837   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.007863   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.008040   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:35.008194   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.008322   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.008408   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:35.008533   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:30:35.008730   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:30:35.008754   36963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-175414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-175414/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-175414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:30:35.123123   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:30:35.123161   36963 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:30:35.123204   36963 buildroot.go:174] setting up certificates
	I0815 23:30:35.123223   36963 provision.go:84] configureAuth start
	I0815 23:30:35.123233   36963 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:30:35.123488   36963 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:30:35.126121   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.126506   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.126534   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.126685   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:35.129150   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.129489   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.129515   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.129732   36963 provision.go:143] copyHostCerts
	I0815 23:30:35.129795   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:30:35.129858   36963 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:30:35.129881   36963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:30:35.129966   36963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:30:35.130083   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:30:35.130107   36963 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:30:35.130115   36963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:30:35.130153   36963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:30:35.130229   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:30:35.130252   36963 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:30:35.130261   36963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:30:35.130290   36963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:30:35.130384   36963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.ha-175414 san=[127.0.0.1 192.168.39.67 ha-175414 localhost minikube]
	I0815 23:30:35.447331   36963 provision.go:177] copyRemoteCerts
	I0815 23:30:35.447380   36963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:30:35.447403   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:35.449888   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.450205   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.450230   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.450434   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:35.450620   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.450771   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:35.450900   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:30:35.536921   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:30:35.537020   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:30:35.564904   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:30:35.565000   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0815 23:30:35.593454   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:30:35.593532   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 23:30:35.620556   36963 provision.go:87] duration metric: took 497.31969ms to configureAuth
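The configureAuth step that finishes above regenerates the machine's server certificate for the SAN list logged by provision.go:117 (127.0.0.1, 192.168.39.67, ha-175414, localhost, minikube). As background only, here is a minimal Go sketch of issuing a certificate with that SAN set; it is not minikube's implementation (which signs with the CA key under .minikube/certs) — it self-signs for brevity, and the organization and validity period are simply echoed from this log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway RSA key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-175414"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration value seen later in this log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go:117 line above.
		DNSNames:    []string{"ha-175414", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
	}
	// Self-signed here; a CA-signed variant would pass the CA certificate and CA key instead of tmpl/key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}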
	I0815 23:30:35.620590   36963 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:30:35.620831   36963 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:30:35.620928   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:35.623626   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.624030   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.624063   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.624243   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:35.624435   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.624635   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.624770   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:35.624954   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:30:35.625149   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:30:35.625170   36963 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:32:06.386399   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:32:06.386431   36963 machine.go:96] duration metric: took 1m31.6315979s to provisionDockerMachine
	I0815 23:32:06.386447   36963 start.go:293] postStartSetup for "ha-175414" (driver="kvm2")
	I0815 23:32:06.386462   36963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:32:06.386483   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.386827   36963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:32:06.386859   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.390005   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.390379   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.390401   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.390579   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.390754   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.390941   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.391077   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:32:06.478027   36963 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:32:06.482432   36963 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:32:06.482464   36963 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:32:06.482535   36963 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:32:06.482640   36963 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:32:06.482653   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:32:06.482755   36963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:32:06.492779   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:32:06.516876   36963 start.go:296] duration metric: took 130.414074ms for postStartSetup
	I0815 23:32:06.516914   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.517200   36963 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 23:32:06.517223   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.519766   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.520222   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.520250   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.520377   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.520592   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.520748   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.520886   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	W0815 23:32:06.604520   36963 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0815 23:32:06.604552   36963 fix.go:56] duration metric: took 1m31.872235233s for fixHost
	I0815 23:32:06.604578   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.607164   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.607491   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.607526   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.607680   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.607875   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.608011   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.608112   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.608237   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:32:06.608450   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:32:06.608464   36963 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:32:06.718720   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764726.686038597
	
	I0815 23:32:06.718741   36963 fix.go:216] guest clock: 1723764726.686038597
	I0815 23:32:06.718752   36963 fix.go:229] Guest: 2024-08-15 23:32:06.686038597 +0000 UTC Remote: 2024-08-15 23:32:06.604561002 +0000 UTC m=+91.996716584 (delta=81.477595ms)
	I0815 23:32:06.718791   36963 fix.go:200] guest clock delta is within tolerance: 81.477595ms
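The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~81ms delta. A rough Go sketch of that comparison is shown below, using the guest value from the log; the 2-second tolerance is an assumption made for illustration, not a value taken from minikube.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the guest's `date +%s.%N` output into a time.Time.
// It assumes the fractional field is the usual 9-digit nanosecond value printed by %N.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest value taken from the log lines above.
	guest, err := parseGuestClock("1723764726.686038597")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, illustration only
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}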
	I0815 23:32:06.718798   36963 start.go:83] releasing machines lock for "ha-175414", held for 1m31.986499668s
	I0815 23:32:06.718838   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.719111   36963 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:32:06.721494   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.721835   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.721876   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.722070   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.722609   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.722790   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.722886   36963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:32:06.722923   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.723024   36963 ssh_runner.go:195] Run: cat /version.json
	I0815 23:32:06.723051   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.725665   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.725767   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.726031   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.726059   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.726200   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.726208   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.726258   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.726342   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.726405   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.726534   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.726599   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.726750   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.726756   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:32:06.726879   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:32:06.829038   36963 ssh_runner.go:195] Run: systemctl --version
	I0815 23:32:06.835465   36963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:32:07.000226   36963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:32:07.007179   36963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:32:07.007251   36963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:32:07.016736   36963 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 23:32:07.016761   36963 start.go:495] detecting cgroup driver to use...
	I0815 23:32:07.016825   36963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:32:07.033938   36963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:32:07.048275   36963 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:32:07.048337   36963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:32:07.062197   36963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:32:07.075852   36963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:32:07.231043   36963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:32:07.418512   36963 docker.go:233] disabling docker service ...
	I0815 23:32:07.418573   36963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:32:07.470311   36963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:32:07.499546   36963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:32:07.667244   36963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:32:07.823075   36963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:32:07.837867   36963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:32:07.857204   36963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:32:07.857269   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.868611   36963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:32:07.868671   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.879608   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.890478   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.901456   36963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:32:07.912582   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.923869   36963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.935473   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.946916   36963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:32:07.956914   36963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:32:07.967164   36963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:32:08.134497   36963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 23:32:17.908997   36963 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.774466167s)
	I0815 23:32:17.909034   36963 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:32:17.909089   36963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:32:17.915543   36963 start.go:563] Will wait 60s for crictl version
	I0815 23:32:17.915604   36963 ssh_runner.go:195] Run: which crictl
	I0815 23:32:17.920068   36963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:32:17.958753   36963 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
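The lines above note that minikube will wait up to 60s for /var/run/crio/crio.sock and for a working crictl after restarting CRI-O. One simple way to express such a wait is a poll-with-deadline loop; the sketch below is illustrative only (the 500ms poll interval is an assumption) and is not the ssh_runner-based logic minikube actually uses.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}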
	I0815 23:32:17.958827   36963 ssh_runner.go:195] Run: crio --version
	I0815 23:32:17.988670   36963 ssh_runner.go:195] Run: crio --version
	I0815 23:32:18.024173   36963 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:32:18.025405   36963 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:32:18.027801   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:18.028125   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:18.028147   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:18.028340   36963 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:32:18.033587   36963 kubeadm.go:883] updating cluster {Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.32 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 23:32:18.033708   36963 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:32:18.033744   36963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:32:18.088265   36963 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:32:18.088287   36963 crio.go:433] Images already preloaded, skipping extraction
	I0815 23:32:18.088338   36963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:32:18.125576   36963 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:32:18.125599   36963 cache_images.go:84] Images are preloaded, skipping loading
	I0815 23:32:18.125606   36963 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.0 crio true true} ...
	I0815 23:32:18.125719   36963 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-175414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:32:18.125789   36963 ssh_runner.go:195] Run: crio config
	I0815 23:32:18.174872   36963 cni.go:84] Creating CNI manager for ""
	I0815 23:32:18.174888   36963 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 23:32:18.174897   36963 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 23:32:18.174921   36963 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-175414 NodeName:ha-175414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 23:32:18.175055   36963 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-175414"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
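The generated KubeletConfiguration above deliberately neutralizes disk-pressure eviction (all evictionHard thresholds at "0%", imageGCHighThresholdPercent at 100) and disables failSwapOn, which is what the "# disable disk resource management by default" comment refers to. Purely as an illustration of reading that fragment programmatically, here is a small Go sketch; gopkg.in/yaml.v3 is used only for the example and is not implied to be what minikube itself uses.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig decodes only the fields of interest from the fragment above.
type kubeletConfig struct {
	CgroupDriver                string            `yaml:"cgroupDriver"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

const fragment = `
cgroupDriver: cgroupfs
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("driver=%s imageGCHigh=%d failSwapOn=%v evictionHard=%v\n",
		cfg.CgroupDriver, cfg.ImageGCHighThresholdPercent, cfg.FailSwapOn, cfg.EvictionHard)
}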
	
	I0815 23:32:18.175074   36963 kube-vip.go:115] generating kube-vip config ...
	I0815 23:32:18.175112   36963 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 23:32:18.186777   36963 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 23:32:18.186895   36963 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 23:32:18.186957   36963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:32:18.196674   36963 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 23:32:18.196734   36963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 23:32:18.206989   36963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0815 23:32:18.224416   36963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:32:18.242040   36963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0815 23:32:18.259472   36963 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 23:32:18.277958   36963 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 23:32:18.281960   36963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:32:18.438386   36963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:32:18.453156   36963 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414 for IP: 192.168.39.67
	I0815 23:32:18.453182   36963 certs.go:194] generating shared ca certs ...
	I0815 23:32:18.453203   36963 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:32:18.453386   36963 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:32:18.453447   36963 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:32:18.453463   36963 certs.go:256] generating profile certs ...
	I0815 23:32:18.453584   36963 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key
	I0815 23:32:18.453624   36963 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.40510575
	I0815 23:32:18.453651   36963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.40510575 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.19 192.168.39.100 192.168.39.254]
	I0815 23:32:18.622827   36963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.40510575 ...
	I0815 23:32:18.622856   36963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.40510575: {Name:mkeb549781490d3c87bc4f21e245a8f5b0f891cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:32:18.623061   36963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.40510575 ...
	I0815 23:32:18.623076   36963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.40510575: {Name:mkdea78273ad07797106df7f96e935f9a1aaa6ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:32:18.623175   36963 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.40510575 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt
	I0815 23:32:18.623347   36963 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.40510575 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key
	I0815 23:32:18.623473   36963 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key
	I0815 23:32:18.623488   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:32:18.623500   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:32:18.623513   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:32:18.623527   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:32:18.623540   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:32:18.623553   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:32:18.623567   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:32:18.623579   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:32:18.623629   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:32:18.623655   36963 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:32:18.623665   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:32:18.623685   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:32:18.623704   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:32:18.623726   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:32:18.623772   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:32:18.623803   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:32:18.623817   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:32:18.623831   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:32:18.624389   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:32:18.651337   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:32:18.675919   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:32:18.700888   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:32:18.724935   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 23:32:18.748899   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 23:32:18.773777   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:32:18.798899   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:32:18.824206   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:32:18.853444   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:32:18.882367   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:32:18.910136   36963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 23:32:18.929146   36963 ssh_runner.go:195] Run: openssl version
	I0815 23:32:18.935934   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:32:18.947225   36963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:32:18.951968   36963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:32:18.952022   36963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:32:18.957858   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0815 23:32:18.967678   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:32:18.978633   36963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:32:18.983247   36963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:32:18.983300   36963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:32:18.989012   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 23:32:18.998524   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:32:19.009833   36963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:32:19.014485   36963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:32:19.014534   36963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:32:19.020120   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 23:32:19.029697   36963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:32:19.034636   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 23:32:19.040323   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 23:32:19.046077   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 23:32:19.051724   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 23:32:19.058061   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 23:32:19.063978   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
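The six openssl probes above use `-checkend 86400` to verify that none of the control-plane certificates expire within the next 24 hours. For readers who prefer to see that spelled out, an equivalent check in Go might look like the sketch below; the path and 24-hour window mirror the log, and this is an illustration rather than minikube's code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at pemPath expires within the given window.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}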
	I0815 23:32:19.070033   36963 kubeadm.go:392] StartCluster: {Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.32 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:32:19.070136   36963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 23:32:19.070173   36963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 23:32:19.109474   36963 cri.go:89] found id: "453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99"
	I0815 23:32:19.109493   36963 cri.go:89] found id: "8ff057f6573bd4d735de692c58a6a38952a75f3f18bc080cc400737049a6e7da"
	I0815 23:32:19.109497   36963 cri.go:89] found id: "5be37cafbe7f3c97cd0ffe329036589d4a99bdd61f07075c5cec580dc4f0f678"
	I0815 23:32:19.109500   36963 cri.go:89] found id: "61a664a258c6badb719a5d06b0dddbb21dabcd05c5104e75aa2f6ba91e819d98"
	I0815 23:32:19.109502   36963 cri.go:89] found id: "d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93"
	I0815 23:32:19.109505   36963 cri.go:89] found id: "6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64"
	I0815 23:32:19.109508   36963 cri.go:89] found id: "fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e"
	I0815 23:32:19.109510   36963 cri.go:89] found id: "dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a"
	I0815 23:32:19.109512   36963 cri.go:89] found id: "70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137"
	I0815 23:32:19.109519   36963 cri.go:89] found id: "41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55"
	I0815 23:32:19.109521   36963 cri.go:89] found id: "aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391"
	I0815 23:32:19.109534   36963 cri.go:89] found id: "af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755"
	I0815 23:32:19.109537   36963 cri.go:89] found id: "b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8"
	I0815 23:32:19.109539   36963 cri.go:89] found id: "0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0"
	I0815 23:32:19.109543   36963 cri.go:89] found id: ""
	I0815 23:32:19.109583   36963 ssh_runner.go:195] Run: sudo runc list -f json
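Before StartCluster proceeds, minikube enumerates the kube-system containers with the crictl query shown at cri.go:54 above and then inspects them with `runc list`. The following sketch simply shells out to that same crictl query to collect the IDs; it is illustrative only and assumes it is run on the node with sudo and crictl available.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query as the ssh_runner line above: list all kube-system container IDs.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(strings.TrimSpace(string(out)))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}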
	
	
	==> CRI-O <==
	Aug 15 23:34:49 ha-175414 crio[3699]: time="2024-08-15 23:34:49.986360935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764889986329951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c9645b4-ed65-4228-93b8-679f0eb29a25 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:34:49 ha-175414 crio[3699]: time="2024-08-15 23:34:49.987139795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11519f2f-c2b8-4dc0-ade5-294496601aa8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:49 ha-175414 crio[3699]: time="2024-08-15 23:34:49.987219137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11519f2f-c2b8-4dc0-ade5-294496601aa8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:49 ha-175414 crio[3699]: time="2024-08-15 23:34:49.987695775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91be7363b3925d4c4e5997a4643efcf6be92524d7bdc7cdd78ec3e7f8d61d329,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723764833861005579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764788863614703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764787851141925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31267b48719346c2570c7dd7e71d8daefd6b6e0afd5a219d2c9c91fbf03835fb,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723764778856533533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b5e61456c820568a14a7e3b41f5d838357e424299ab8f52aa88d2133af83ac,PodSandboxId:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764777158867822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09cf1043a0abee0ecf8227331084602bc4610657a40df0ad3bcc20ec14275259,PodSandboxId:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723764754674944654,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362,PodSandboxId:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764750163623247,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd,PodSandboxId:e4716878078ff8e0ec331b9fce712691476c897f9d38b88f87f02ba0003f849e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723764744855706654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b,PodSandboxId:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723764743968148849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429,PodSandboxId:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764743935340077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082,PodSandboxId:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764743812426484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
8d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1,PodSandboxId:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764743727237860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa
28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723764743660487260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60f
f5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723764743589026191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99,PodSandboxId:0e7cbb8b2f807a28bf3efd56ecb4c990dc8c1c994f6aa3ebbbd3c203add6cbb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764727427096623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kub
ernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723764234620157579,Labels:map[string]stri
ng{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764100474788152,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723764088513493764,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723764086149001826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723764074424409898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723764074344897958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11519f2f-c2b8-4dc0-ade5-294496601aa8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.036235548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f59ab4fa-d0b4-4be9-a6e7-e085aa132e30 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.036460322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f59ab4fa-d0b4-4be9-a6e7-e085aa132e30 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.037967977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45501830-6111-4c57-965e-d5601acf21d8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.038522375Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764890038495270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45501830-6111-4c57-965e-d5601acf21d8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.039009594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7f3175b-6d52-4066-aa4d-cee6d2543049 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.039086855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7f3175b-6d52-4066-aa4d-cee6d2543049 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.039674678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91be7363b3925d4c4e5997a4643efcf6be92524d7bdc7cdd78ec3e7f8d61d329,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723764833861005579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764788863614703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764787851141925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31267b48719346c2570c7dd7e71d8daefd6b6e0afd5a219d2c9c91fbf03835fb,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723764778856533533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b5e61456c820568a14a7e3b41f5d838357e424299ab8f52aa88d2133af83ac,PodSandboxId:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764777158867822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09cf1043a0abee0ecf8227331084602bc4610657a40df0ad3bcc20ec14275259,PodSandboxId:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723764754674944654,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362,PodSandboxId:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764750163623247,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd,PodSandboxId:e4716878078ff8e0ec331b9fce712691476c897f9d38b88f87f02ba0003f849e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723764744855706654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b,PodSandboxId:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723764743968148849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429,PodSandboxId:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764743935340077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082,PodSandboxId:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764743812426484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
8d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1,PodSandboxId:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764743727237860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa
28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723764743660487260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60f
f5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723764743589026191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99,PodSandboxId:0e7cbb8b2f807a28bf3efd56ecb4c990dc8c1c994f6aa3ebbbd3c203add6cbb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764727427096623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kub
ernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723764234620157579,Labels:map[string]stri
ng{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764100474788152,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723764088513493764,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723764086149001826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723764074424409898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723764074344897958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7f3175b-6d52-4066-aa4d-cee6d2543049 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.086890059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=053d1280-4160-44db-aed9-18cacff6b1f6 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.086991153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=053d1280-4160-44db-aed9-18cacff6b1f6 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.088128615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad819c06-5879-49ea-a517-c8f36ee5f673 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.088779016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764890088754079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad819c06-5879-49ea-a517-c8f36ee5f673 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.089379609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87676135-9f6f-4f7b-ad87-8ffdf7c406ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.089456551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87676135-9f6f-4f7b-ad87-8ffdf7c406ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.089859398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91be7363b3925d4c4e5997a4643efcf6be92524d7bdc7cdd78ec3e7f8d61d329,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723764833861005579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764788863614703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764787851141925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31267b48719346c2570c7dd7e71d8daefd6b6e0afd5a219d2c9c91fbf03835fb,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723764778856533533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b5e61456c820568a14a7e3b41f5d838357e424299ab8f52aa88d2133af83ac,PodSandboxId:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764777158867822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09cf1043a0abee0ecf8227331084602bc4610657a40df0ad3bcc20ec14275259,PodSandboxId:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723764754674944654,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362,PodSandboxId:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764750163623247,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd,PodSandboxId:e4716878078ff8e0ec331b9fce712691476c897f9d38b88f87f02ba0003f849e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723764744855706654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b,PodSandboxId:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723764743968148849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429,PodSandboxId:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764743935340077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082,PodSandboxId:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764743812426484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
8d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1,PodSandboxId:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764743727237860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa
28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723764743660487260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60f
f5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723764743589026191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99,PodSandboxId:0e7cbb8b2f807a28bf3efd56ecb4c990dc8c1c994f6aa3ebbbd3c203add6cbb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764727427096623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kub
ernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723764234620157579,Labels:map[string]stri
ng{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764100474788152,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723764088513493764,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723764086149001826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723764074424409898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723764074344897958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87676135-9f6f-4f7b-ad87-8ffdf7c406ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.139536698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d05c59c3-7f91-43c8-af87-3ab134025f94 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.139616975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d05c59c3-7f91-43c8-af87-3ab134025f94 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.140639652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b8a539a-8ee0-4709-8a40-2ffc99ed2b4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.141086998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764890141056369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b8a539a-8ee0-4709-8a40-2ffc99ed2b4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.141714028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c8e7517-aa87-420a-a6d7-01f3463612ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.141775278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c8e7517-aa87-420a-a6d7-01f3463612ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:34:50 ha-175414 crio[3699]: time="2024-08-15 23:34:50.142228553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91be7363b3925d4c4e5997a4643efcf6be92524d7bdc7cdd78ec3e7f8d61d329,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723764833861005579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764788863614703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764787851141925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31267b48719346c2570c7dd7e71d8daefd6b6e0afd5a219d2c9c91fbf03835fb,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723764778856533533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b5e61456c820568a14a7e3b41f5d838357e424299ab8f52aa88d2133af83ac,PodSandboxId:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764777158867822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09cf1043a0abee0ecf8227331084602bc4610657a40df0ad3bcc20ec14275259,PodSandboxId:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723764754674944654,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362,PodSandboxId:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764750163623247,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd,PodSandboxId:e4716878078ff8e0ec331b9fce712691476c897f9d38b88f87f02ba0003f849e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723764744855706654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b,PodSandboxId:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723764743968148849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429,PodSandboxId:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764743935340077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082,PodSandboxId:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764743812426484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
8d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1,PodSandboxId:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764743727237860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa
28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723764743660487260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60f
f5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723764743589026191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99,PodSandboxId:0e7cbb8b2f807a28bf3efd56ecb4c990dc8c1c994f6aa3ebbbd3c203add6cbb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764727427096623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kub
ernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723764234620157579,Labels:map[string]stri
ng{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764100474788152,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723764088513493764,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723764086149001826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723764074424409898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723764074344897958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c8e7517-aa87-420a-a6d7-01f3463612ac name=/runtime.v1.RuntimeService/ListContainers
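The crio debug entries above are CRI-O's trace of the kubelet's CRI calls (Version, ImageFsInfo, ListContainers); each large "Response:" blob is a serialized ListContainersResponse covering every container on the node. As a rough equivalent, assuming crictl is available in the guest and CRI-O is listening on its default socket, the same listing can be requested by hand over minikube ssh:

    $ minikube -p ha-175414 ssh -- sudo crictl ps -a             # table view of the same container set
    $ minikube -p ha-175414 ssh -- sudo crictl ps -a -o json     # raw fields, close to the ListContainersResponse above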
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	91be7363b3925       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Running             storage-provisioner       4                   9aa34875f76cf       storage-provisioner
	3db7adbcee13c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   2                   daa8c968b6f12       kube-controller-manager-ha-175414
	82da16254ec56       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            3                   30a091962cf5c       kube-apiserver-ha-175414
	31267b4871934       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   9aa34875f76cf       storage-provisioner
	e2b5e61456c82       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   3782c37a72b34       busybox-7dff88458-ztvms
	09cf1043a0abe       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   d66c19a5c116d       kube-vip-ha-175414
	0a0b43b81fbcb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   2                   6c5918c0042cb       coredns-6f6b679f8f-zrv4c
	602292b2cbfa5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   e4716878078ff       kube-proxy-4frcn
	7ff4093cdbbdd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   3ba3e04d84149       kindnet-jjcdm
	a08812575d2b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   7fa869b54d0fc       coredns-6f6b679f8f-vkm5s
	34369a9e60b2d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   1f58063048db7       etcd-ha-175414
	55966e7435723       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   3359df4c20b28       kube-scheduler-ha-175414
	f8c3019e323c6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   daa8c968b6f12       kube-controller-manager-ha-175414
	e1edfb586686e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   30a091962cf5c       kube-apiserver-ha-175414
	453ec763ed5d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Exited              coredns                   1                   0e7cbb8b2f807       coredns-6f6b679f8f-zrv4c
	e6f2ac1a3791a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   1555ba5313b4a       busybox-7dff88458-ztvms
	d266fdeedd2d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   33df4c1e88a57       coredns-6f6b679f8f-vkm5s
	dce83cbb20557       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   1392391da1090       kindnet-jjcdm
	70eb25dbc5fac       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   51e2286f4b6df       kube-proxy-4frcn
	aaba7057e0920       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   94e761b5a2dbf       etcd-ha-175414
	af5abf6569d1f       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago       Exited              kube-scheduler            0                   6bc6e4c03eedb       kube-scheduler-ha-175414
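The "container status" table is this same container set in the short form crictl prints: truncated ID, image, age, state, name, attempt (restart count), pod sandbox ID and pod. To dig into one of the exited control-plane attempts listed there, a plausible next step (hypothetical invocation; the ID prefix is taken from the table, and crictl accepts unique ID prefixes) is:

    $ minikube -p ha-175414 ssh -- sudo crictl logs e1edfb586686e       # exited kube-apiserver, attempt 2
    $ minikube -p ha-175414 ssh -- sudo crictl inspect e1edfb586686e    # full status and spec for that container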
	
	
	==> coredns [0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:48882->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:48882->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:48902->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:48902->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
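Each failing line in this CoreDNS log is the kubernetes plugin's client-go reflector trying to list or watch Services, EndpointSlices and Namespaces through the in-cluster service VIP 10.96.0.1:443 while the apiserver was unreachable ("connection refused", "no route to host"); the readiness plugin keeps reporting that it is still waiting on "kubernetes" until a list succeeds. A quick external check of that VIP and its backing endpoints, assuming the kubeconfig context is named after the profile, would be:

    $ kubectl --context ha-175414 get svc kubernetes -n default
    $ kubectl --context ha-175414 get endpointslices -n default -l kubernetes.io/service-name=kubernetes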
	
	
	==> coredns [453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:32932 - 16878 "HINFO IN 2839216306064695090.8854576555639446388. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011032557s
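This exited CoreDNS attempt started serving before it had ever synced with the Kubernetes API (the "unsynced Kubernetes API" warning) and was then terminated; the SHA512 line identifies the Corefile it loaded. In a kubeadm-style cluster such as the one minikube builds, that Corefile normally lives in the coredns ConfigMap in kube-system, which can be inspected with:

    $ kubectl --context ha-175414 -n kube-system get configmap coredns -o yaml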
	
	
	==> coredns [a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42502->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[741196442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:32:38.883) (total time: 10151ms):
	Trace[741196442]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42502->10.96.0.1:443: read: connection reset by peer 10151ms (23:32:49.034)
	Trace[741196442]: [10.151670777s] [10.151670777s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42502->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42512->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42512->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93] <==
	[INFO] 10.244.0.4:59435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073012s
	[INFO] 10.244.2.2:60026 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235829s
	[INFO] 10.244.2.2:58530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018432s
	[INFO] 10.244.1.2:44913 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119773s
	[INFO] 10.244.1.2:52756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123167s
	[INFO] 10.244.0.4:39480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124675s
	[INFO] 10.244.0.4:51365 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114789s
	[INFO] 10.244.0.4:49967 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068329s
	[INFO] 10.244.0.4:42637 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073642s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1900&timeout=8m53s&timeoutSeconds=533&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1900&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-175414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T23_21_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:21:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:33:09 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:33:09 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:33:09 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:33:09 +0000   Thu, 15 Aug 2024 23:21:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-175414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b0ddee9ca5943d7802a25ee6a9c7f34
	  System UUID:                7b0ddee9-ca59-43d7-802a-25ee6a9c7f34
	  Boot ID:                    a257efb5-ad21-419a-b259-592d48073d80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ztvms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-vkm5s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-zrv4c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-175414                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-jjcdm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-175414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-175414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4frcn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-175414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-175414                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 102s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-175414 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-175414 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-175414 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-175414 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-175414 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-175414 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                    node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-175414 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Warning  ContainerGCFailed        3m30s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m30s (x4 over 3m44s)  kubelet          Node ha-175414 status is now: NodeNotReady
	  Normal   RegisteredNode           106s                   node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal   RegisteredNode           96s                    node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	
	
	Name:               ha-175414-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_22_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:22:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:33:51 +0000   Thu, 15 Aug 2024 23:33:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:33:51 +0000   Thu, 15 Aug 2024 23:33:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:33:51 +0000   Thu, 15 Aug 2024 23:33:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:33:51 +0000   Thu, 15 Aug 2024 23:33:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-175414-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e48881ea1334f28a03d47bf7b09ff84
	  System UUID:                1e48881e-a133-4f28-a03d-47bf7b09ff84
	  Boot ID:                    eec79460-aaa8-401d-a650-94c3fb86c560
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kt8v4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-175414-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-47nts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-175414-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-175414-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dcnmc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-175414-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-175414-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 81s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-175414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-175414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-175414-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  NodeNotReady             9m1s                 node-controller  Node ha-175414-m02 status is now: NodeNotReady
	  Normal  Starting                 2m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node ha-175414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node ha-175414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node ha-175414-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s                 node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           96s                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           39s                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	
	
	Name:               ha-175414-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_23_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:34:24 +0000   Thu, 15 Aug 2024 23:33:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:34:24 +0000   Thu, 15 Aug 2024 23:33:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:34:24 +0000   Thu, 15 Aug 2024 23:33:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:34:24 +0000   Thu, 15 Aug 2024 23:33:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-175414-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03cd54aa1c764ef1be98b373af236f27
	  System UUID:                03cd54aa-1c76-4ef1-be98-b373af236f27
	  Boot ID:                    6967c841-1368-4353-ad10-ec1ce064b042
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-glqlv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-175414-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-fp2gc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-175414-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-175414-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qtps7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-175414-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-175414-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-175414-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-175414-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-175414-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	  Normal   RegisteredNode           106s               node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	  Normal   NodeNotReady             66s                node-controller  Node ha-175414-m03 status is now: NodeNotReady
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 57s                kubelet          Node ha-175414-m03 has been rebooted, boot id: 6967c841-1368-4353-ad10-ec1ce064b042
	  Normal   NodeHasSufficientMemory  57s (x2 over 57s)  kubelet          Node ha-175414-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x2 over 57s)  kubelet          Node ha-175414-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x2 over 57s)  kubelet          Node ha-175414-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                57s                kubelet          Node ha-175414-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-175414-m03 event: Registered Node ha-175414-m03 in Controller
	
	
	Name:               ha-175414-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_24_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:24:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:34:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:34:42 +0000   Thu, 15 Aug 2024 23:34:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:34:42 +0000   Thu, 15 Aug 2024 23:34:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:34:42 +0000   Thu, 15 Aug 2024 23:34:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:34:42 +0000   Thu, 15 Aug 2024 23:34:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    ha-175414-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4da843156b4c43e0a4311c72833aae78
	  System UUID:                4da84315-6b4c-43e0-a431-1c72833aae78
	  Boot ID:                    774e1017-4917-4afa-9c43-9b106cb79caa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6bf4q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-jm5fj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-175414-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-175414-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-175414-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-175414-m04 status is now: NodeReady
	  Normal   RegisteredNode           106s               node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   NodeNotReady             66s                node-controller  Node ha-175414-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-175414-m04 has been rebooted, boot id: 774e1017-4917-4afa-9c43-9b106cb79caa
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-175414-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-175414-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-175414-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             8s                 kubelet          Node ha-175414-m04 status is now: NodeNotReady
	  Normal   NodeReady                8s                 kubelet          Node ha-175414-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.056390] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050948] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.198639] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.119702] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.271672] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.126980] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.023155] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.059629] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.252555] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.087359] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.483452] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.149794] kauditd_printk_skb: 38 callbacks suppressed
	[Aug15 23:22] kauditd_printk_skb: 26 callbacks suppressed
	[Aug15 23:32] systemd-fstab-generator[3510]: Ignoring "noauto" option for root device
	[  +0.163710] systemd-fstab-generator[3538]: Ignoring "noauto" option for root device
	[  +0.277025] systemd-fstab-generator[3633]: Ignoring "noauto" option for root device
	[  +0.145791] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.323149] systemd-fstab-generator[3684]: Ignoring "noauto" option for root device
	[ +10.307759] systemd-fstab-generator[3809]: Ignoring "noauto" option for root device
	[  +0.086388] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.037089] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.778546] kauditd_printk_skb: 73 callbacks suppressed
	[ +15.260805] kauditd_printk_skb: 5 callbacks suppressed
	[Aug15 23:33] kauditd_printk_skb: 5 callbacks suppressed
	[ +18.736514] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082] <==
	{"level":"warn","ts":"2024-08-15T23:33:48.250368Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:33:48.350121Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:33:48.416591Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:33:48.418681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ce564ad586a3115","from":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T23:33:48.427171Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.100:2380/version","remote-member-id":"a244d22cbced21a4","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:48.427330Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a244d22cbced21a4","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:49.750029Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a244d22cbced21a4","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:49.757221Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a244d22cbced21a4","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:52.428940Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.100:2380/version","remote-member-id":"a244d22cbced21a4","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:52.429116Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a244d22cbced21a4","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:54.751231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a244d22cbced21a4","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:54.757465Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a244d22cbced21a4","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:56.431185Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.100:2380/version","remote-member-id":"a244d22cbced21a4","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:56.431313Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a244d22cbced21a4","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:59.752299Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a244d22cbced21a4","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:33:59.758728Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a244d22cbced21a4","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:34:00.433536Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.100:2380/version","remote-member-id":"a244d22cbced21a4","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T23:34:00.433681Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a244d22cbced21a4","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-15T23:34:03.058338Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:03.058383Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:03.071389Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ce564ad586a3115","to":"a244d22cbced21a4","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T23:34:03.071455Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:03.077655Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ce564ad586a3115","to":"a244d22cbced21a4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T23:34:03.077748Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:03.151329Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	
	
	==> etcd [aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391] <==
	2024/08/15 23:30:35 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/15 23:30:35 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T23:30:36.038879Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:30:36.039388Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T23:30:36.040667Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ce564ad586a3115","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-15T23:30:36.040807Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.040841Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.040864Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.040921Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.040987Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.041037Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.041048Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.041053Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041066Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041087Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041171Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041197Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041224Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041307Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.044204Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"warn","ts":"2024-08-15T23:30:36.044366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.367147984s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-15T23:30:36.044406Z","caller":"traceutil/trace.go:171","msg":"trace[713965431] range","detail":"{range_begin:; range_end:; }","duration":"2.367201695s","start":"2024-08-15T23:30:33.677197Z","end":"2024-08-15T23:30:36.044398Z","steps":["trace[713965431] 'agreement among raft nodes before linearized reading'  (duration: 2.3671438s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T23:30:36.044435Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-15T23:30:36.045176Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-15T23:30:36.045199Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-175414","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	
	
	==> kernel <==
	 23:34:50 up 14 min,  0 users,  load average: 0.33, 0.41, 0.27
	Linux ha-175414 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b] <==
	I0815 23:34:15.189069       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:34:25.187716       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:34:25.187785       1 main.go:299] handling current node
	I0815 23:34:25.187803       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:34:25.187811       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:34:25.188050       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:34:25.188068       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:34:25.188184       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:34:25.188219       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:34:35.194017       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:34:35.194461       1 main.go:299] handling current node
	I0815 23:34:35.194522       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:34:35.194535       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:34:35.194723       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:34:35.194760       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:34:35.194891       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:34:35.194923       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:34:45.188349       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:34:45.188574       1 main.go:299] handling current node
	I0815 23:34:45.188604       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:34:45.188625       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:34:45.188776       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:34:45.188808       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:34:45.188892       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:34:45.188911       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a] <==
	I0815 23:29:59.559131       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:30:09.558899       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:30:09.558951       1 main.go:299] handling current node
	I0815 23:30:09.558966       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:30:09.558971       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:30:09.559118       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:30:09.559139       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:30:09.559210       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:30:09.559215       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:30:19.560237       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:30:19.560350       1 main.go:299] handling current node
	I0815 23:30:19.560370       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:30:19.560375       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:30:19.560526       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:30:19.560550       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:30:19.560617       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:30:19.560636       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:30:29.559331       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:30:29.559501       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:30:29.559728       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:30:29.559757       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:30:29.559830       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:30:29.559849       1 main.go:299] handling current node
	I0815 23:30:29.559873       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:30:29.559902       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0] <==
	I0815 23:33:10.716145       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0815 23:33:10.716163       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0815 23:33:10.784770       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 23:33:10.801327       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 23:33:10.801365       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 23:33:10.801483       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 23:33:10.801520       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 23:33:10.803963       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 23:33:10.804139       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 23:33:10.804140       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 23:33:10.810352       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 23:33:10.810350       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:33:10.810404       1 policy_source.go:224] refreshing policies
	I0815 23:33:10.815867       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 23:33:10.815988       1 aggregator.go:171] initial CRD sync complete...
	I0815 23:33:10.816039       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 23:33:10.816070       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 23:33:10.816099       1 cache.go:39] Caches are synced for autoregister controller
	W0815 23:33:10.818656       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.19]
	I0815 23:33:10.819865       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 23:33:10.831149       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 23:33:10.835676       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 23:33:10.884798       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:33:11.707541       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 23:33:12.049902       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.19 192.168.39.67]
	
	
	==> kube-apiserver [e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06] <==
	I0815 23:32:24.055831       1 options.go:228] external host was not specified, using 192.168.39.67
	I0815 23:32:24.074500       1 server.go:142] Version: v1.31.0
	I0815 23:32:24.074605       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:32:25.183105       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 23:32:25.200551       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 23:32:25.200591       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 23:32:25.200796       1 instance.go:232] Using reconciler: lease
	I0815 23:32:25.202141       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0815 23:32:45.182078       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 23:32:45.182077       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0815 23:32:45.202241       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230] <==
	I0815 23:33:30.245069       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e8f67e26-3f6d-45db-9925-7997ef7eddac", APIVersion:"v1", ResourceVersion:"286", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-97ntk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-97ntk": the object has been modified; please apply your changes to the latest version and try again
	I0815 23:33:30.313873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="44.062599ms"
	I0815 23:33:30.314822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="83.442µs"
	I0815 23:33:44.342809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:33:44.345774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m03"
	I0815 23:33:44.370413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:33:44.386670       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m03"
	I0815 23:33:44.532478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.978095ms"
	I0815 23:33:44.535086       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="173.729µs"
	I0815 23:33:49.125553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m03"
	I0815 23:33:51.466702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m02"
	I0815 23:33:53.219849       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m03"
	I0815 23:33:53.246982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m03"
	I0815 23:33:54.126575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m03"
	I0815 23:33:54.271058       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.98µs"
	I0815 23:33:59.214355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:34:10.676901       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.479006ms"
	I0815 23:34:10.677125       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.701µs"
	I0815 23:34:11.590413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:34:11.699623       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:34:24.011113       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m03"
	I0815 23:34:42.737010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:34:42.738124       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-175414-m04"
	I0815 23:34:42.758117       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:34:44.154642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	
	
	==> kube-controller-manager [f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9] <==
	I0815 23:32:24.764503       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:32:25.559011       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:32:25.559103       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:32:25.561036       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:32:25.561202       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:32:25.561797       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:32:25.561732       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0815 23:32:46.208590       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.67:8443/healthz\": dial tcp 192.168.39.67:8443: connect: connection refused"
	
	
	==> kube-proxy [602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:32:28.426908       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 23:32:31.499383       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 23:32:34.570810       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 23:32:40.715090       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 23:32:49.932365       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0815 23:33:07.740814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E0815 23:33:07.740995       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:33:07.786870       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:33:07.786978       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:33:07.787024       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:33:07.790460       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:33:07.790870       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:33:07.790924       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:33:07.792801       1 config.go:197] "Starting service config controller"
	I0815 23:33:07.792874       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:33:07.792914       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:33:07.792942       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:33:07.793619       1 config.go:326] "Starting node config controller"
	I0815 23:33:07.793695       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:33:07.895395       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:33:07.895505       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:33:07.895592       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137] <==
	E0815 23:29:24.106913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:24.107313       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:24.107383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:27.178725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:27.178849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:30.252428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:30.252655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:30.251409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:30.253305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:39.467585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:39.467841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:42.540650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:42.540855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:45.612535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:45.612809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:57.902359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:57.902437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:57.903369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:57.903527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:30:04.043786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:30:04.043888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:30:28.619962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:30:28.620139       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:30:28.620301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:30:28.620360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1] <==
	W0815 23:33:01.407343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.67:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:01.407402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.67:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:01.765507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.67:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:01.765652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.67:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:01.810031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.67:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:01.810231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.67:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:02.322323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.67:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:02.322368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.67:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:02.424034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.67:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:02.424097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.67:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:02.610496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.67:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:02.610563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.67:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:03.117431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.67:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:03.117505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.67:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:03.371890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.67:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:03.372010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.67:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:03.450601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.67:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:03.450722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.67:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:06.059969       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.67:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:06.060088       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.67:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:07.286368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.67:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:07.286534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.67:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:07.600069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.67:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:07.600193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.67:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	I0815 23:33:19.518338       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755] <==
	E0815 23:24:31.009629       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m6wl5\": pod kindnet-m6wl5 is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m6wl5" node="ha-175414-m04"
	E0815 23:24:31.009730       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod efa64311-983a-46d2-88b4-306fc316f564(kube-system/kindnet-m6wl5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-m6wl5"
	E0815 23:24:31.009767       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m6wl5\": pod kindnet-m6wl5 is already assigned to node \"ha-175414-m04\"" pod="kube-system/kindnet-m6wl5"
	I0815 23:24:31.009797       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m6wl5" node="ha-175414-m04"
	E0815 23:24:31.089615       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w68mv\": pod kube-proxy-w68mv is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w68mv" node="ha-175414-m04"
	E0815 23:24:31.093322       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8dece2a7-e846-45c9-81a2-a5766b3e2a59(kube-system/kube-proxy-w68mv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w68mv"
	E0815 23:24:31.093536       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w68mv\": pod kube-proxy-w68mv is already assigned to node \"ha-175414-m04\"" pod="kube-system/kube-proxy-w68mv"
	I0815 23:24:31.093743       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w68mv" node="ha-175414-m04"
	E0815 23:24:31.092964       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-442dg\": pod kindnet-442dg is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-442dg" node="ha-175414-m04"
	E0815 23:24:31.099497       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a7abeee9-7619-4535-9654-3a395026f469(kube-system/kindnet-442dg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-442dg"
	E0815 23:24:31.099565       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-442dg\": pod kindnet-442dg is already assigned to node \"ha-175414-m04\"" pod="kube-system/kindnet-442dg"
	I0815 23:24:31.099706       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-442dg" node="ha-175414-m04"
	E0815 23:30:27.195115       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0815 23:30:27.360447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0815 23:30:28.865167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0815 23:30:30.762656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0815 23:30:32.113696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0815 23:30:32.185637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0815 23:30:33.097130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0815 23:30:33.493938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0815 23:30:33.873939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0815 23:30:34.643051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0815 23:30:35.429671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	I0815 23:30:35.738821       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0815 23:30:35.738936       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 15 23:33:41 ha-175414 kubelet[1322]: E0815 23:33:41.084015    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764821083462132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:33:41 ha-175414 kubelet[1322]: E0815 23:33:41.084049    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764821083462132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:33:46 ha-175414 kubelet[1322]: I0815 23:33:46.839474    1322 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-175414" podUID="6b98571e-8ad5-45e0-acbc-d0e875647a69"
	Aug 15 23:33:46 ha-175414 kubelet[1322]: I0815 23:33:46.865425    1322 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-175414"
	Aug 15 23:33:51 ha-175414 kubelet[1322]: E0815 23:33:51.085900    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764831085662184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:33:51 ha-175414 kubelet[1322]: E0815 23:33:51.085941    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764831085662184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:33:53 ha-175414 kubelet[1322]: I0815 23:33:53.839534    1322 scope.go:117] "RemoveContainer" containerID="31267b48719346c2570c7dd7e71d8daefd6b6e0afd5a219d2c9c91fbf03835fb"
	Aug 15 23:33:54 ha-175414 kubelet[1322]: I0815 23:33:54.623392    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-175414" podStartSLOduration=8.623342012 podStartE2EDuration="8.623342012s" podCreationTimestamp="2024-08-15 23:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-15 23:33:50.860214275 +0000 UTC m=+750.205717562" watchObservedRunningTime="2024-08-15 23:33:54.623342012 +0000 UTC m=+753.968845298"
	Aug 15 23:34:01 ha-175414 kubelet[1322]: E0815 23:34:01.093620    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764841091750914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:01 ha-175414 kubelet[1322]: E0815 23:34:01.093714    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764841091750914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:11 ha-175414 kubelet[1322]: E0815 23:34:11.098906    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764851097958028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:11 ha-175414 kubelet[1322]: E0815 23:34:11.098964    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764851097958028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:20 ha-175414 kubelet[1322]: E0815 23:34:20.858772    1322 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:34:20 ha-175414 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:34:20 ha-175414 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:34:20 ha-175414 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:34:20 ha-175414 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:34:21 ha-175414 kubelet[1322]: E0815 23:34:21.100628    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764861099868991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:21 ha-175414 kubelet[1322]: E0815 23:34:21.100654    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764861099868991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:31 ha-175414 kubelet[1322]: E0815 23:34:31.103687    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764871102500896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:31 ha-175414 kubelet[1322]: E0815 23:34:31.104001    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764871102500896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:41 ha-175414 kubelet[1322]: E0815 23:34:41.106947    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764881106109219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:41 ha-175414 kubelet[1322]: E0815 23:34:41.107036    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764881106109219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:51 ha-175414 kubelet[1322]: E0815 23:34:51.109499    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764891109038460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:34:51 ha-175414 kubelet[1322]: E0815 23:34:51.109553    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764891109038460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 23:34:49.664250   38349 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19452-12919/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-175414 -n ha-175414
helpers_test.go:261: (dbg) Run:  kubectl --context ha-175414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 stop -v=7 --alsologtostderr: exit status 82 (2m0.473162191s)

                                                
                                                
-- stdout --
	* Stopping node "ha-175414-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:35:08.557085   38760 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:35:08.557217   38760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:35:08.557226   38760 out.go:358] Setting ErrFile to fd 2...
	I0815 23:35:08.557230   38760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:35:08.557420   38760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:35:08.557630   38760 out.go:352] Setting JSON to false
	I0815 23:35:08.557701   38760 mustload.go:65] Loading cluster: ha-175414
	I0815 23:35:08.558093   38760 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:35:08.558183   38760 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:35:08.558362   38760 mustload.go:65] Loading cluster: ha-175414
	I0815 23:35:08.558484   38760 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:35:08.558505   38760 stop.go:39] StopHost: ha-175414-m04
	I0815 23:35:08.558866   38760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:35:08.558906   38760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:35:08.574276   38760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0815 23:35:08.574704   38760 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:35:08.575240   38760 main.go:141] libmachine: Using API Version  1
	I0815 23:35:08.575264   38760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:35:08.575609   38760 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:35:08.577885   38760 out.go:177] * Stopping node "ha-175414-m04"  ...
	I0815 23:35:08.579287   38760 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 23:35:08.579314   38760 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:35:08.579565   38760 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 23:35:08.579599   38760 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:35:08.582848   38760 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:35:08.583304   38760 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:34:37 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:35:08.583335   38760 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:35:08.583501   38760 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:35:08.583663   38760 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:35:08.583796   38760 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:35:08.583943   38760 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	I0815 23:35:08.672817   38760 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 23:35:08.725983   38760 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 23:35:08.779310   38760 main.go:141] libmachine: Stopping "ha-175414-m04"...
	I0815 23:35:08.779361   38760 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:35:08.780683   38760 main.go:141] libmachine: (ha-175414-m04) Calling .Stop
	I0815 23:35:08.784493   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 0/120
	I0815 23:35:09.785752   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 1/120
	I0815 23:35:10.787134   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 2/120
	I0815 23:35:11.788433   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 3/120
	I0815 23:35:12.789666   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 4/120
	I0815 23:35:13.791775   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 5/120
	I0815 23:35:14.793036   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 6/120
	I0815 23:35:15.794609   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 7/120
	I0815 23:35:16.796176   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 8/120
	I0815 23:35:17.797408   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 9/120
	I0815 23:35:18.798672   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 10/120
	I0815 23:35:19.800444   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 11/120
	I0815 23:35:20.802012   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 12/120
	I0815 23:35:21.803425   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 13/120
	I0815 23:35:22.804770   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 14/120
	I0815 23:35:23.806596   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 15/120
	I0815 23:35:24.808434   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 16/120
	I0815 23:35:25.810004   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 17/120
	I0815 23:35:26.811378   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 18/120
	I0815 23:35:27.812782   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 19/120
	I0815 23:35:28.814927   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 20/120
	I0815 23:35:29.816239   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 21/120
	I0815 23:35:30.817752   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 22/120
	I0815 23:35:31.819157   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 23/120
	I0815 23:35:32.820875   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 24/120
	I0815 23:35:33.822956   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 25/120
	I0815 23:35:34.824415   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 26/120
	I0815 23:35:35.825666   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 27/120
	I0815 23:35:36.827113   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 28/120
	I0815 23:35:37.828871   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 29/120
	I0815 23:35:38.831179   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 30/120
	I0815 23:35:39.832889   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 31/120
	I0815 23:35:40.835133   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 32/120
	I0815 23:35:41.836296   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 33/120
	I0815 23:35:42.837571   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 34/120
	I0815 23:35:43.839795   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 35/120
	I0815 23:35:44.841071   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 36/120
	I0815 23:35:45.842622   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 37/120
	I0815 23:35:46.844314   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 38/120
	I0815 23:35:47.845917   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 39/120
	I0815 23:35:48.848090   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 40/120
	I0815 23:35:49.849402   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 41/120
	I0815 23:35:50.850747   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 42/120
	I0815 23:35:51.852166   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 43/120
	I0815 23:35:52.853656   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 44/120
	I0815 23:35:53.855556   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 45/120
	I0815 23:35:54.856862   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 46/120
	I0815 23:35:55.858180   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 47/120
	I0815 23:35:56.860285   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 48/120
	I0815 23:35:57.861532   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 49/120
	I0815 23:35:58.863590   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 50/120
	I0815 23:35:59.865092   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 51/120
	I0815 23:36:00.866362   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 52/120
	I0815 23:36:01.868529   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 53/120
	I0815 23:36:02.870121   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 54/120
	I0815 23:36:03.872267   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 55/120
	I0815 23:36:04.873886   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 56/120
	I0815 23:36:05.875604   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 57/120
	I0815 23:36:06.877368   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 58/120
	I0815 23:36:07.879093   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 59/120
	I0815 23:36:08.881344   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 60/120
	I0815 23:36:09.882866   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 61/120
	I0815 23:36:10.884411   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 62/120
	I0815 23:36:11.885626   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 63/120
	I0815 23:36:12.886979   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 64/120
	I0815 23:36:13.888877   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 65/120
	I0815 23:36:14.890164   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 66/120
	I0815 23:36:15.892421   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 67/120
	I0815 23:36:16.893683   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 68/120
	I0815 23:36:17.895078   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 69/120
	I0815 23:36:18.897083   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 70/120
	I0815 23:36:19.898415   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 71/120
	I0815 23:36:20.900401   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 72/120
	I0815 23:36:21.901793   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 73/120
	I0815 23:36:22.903244   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 74/120
	I0815 23:36:23.905258   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 75/120
	I0815 23:36:24.906707   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 76/120
	I0815 23:36:25.908824   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 77/120
	I0815 23:36:26.910339   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 78/120
	I0815 23:36:27.911824   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 79/120
	I0815 23:36:28.913418   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 80/120
	I0815 23:36:29.915267   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 81/120
	I0815 23:36:30.916540   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 82/120
	I0815 23:36:31.918429   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 83/120
	I0815 23:36:32.919666   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 84/120
	I0815 23:36:33.921674   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 85/120
	I0815 23:36:34.923406   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 86/120
	I0815 23:36:35.925059   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 87/120
	I0815 23:36:36.926407   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 88/120
	I0815 23:36:37.927844   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 89/120
	I0815 23:36:38.929458   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 90/120
	I0815 23:36:39.930970   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 91/120
	I0815 23:36:40.932361   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 92/120
	I0815 23:36:41.934154   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 93/120
	I0815 23:36:42.935507   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 94/120
	I0815 23:36:43.937473   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 95/120
	I0815 23:36:44.938684   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 96/120
	I0815 23:36:45.939967   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 97/120
	I0815 23:36:46.941384   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 98/120
	I0815 23:36:47.943029   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 99/120
	I0815 23:36:48.945181   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 100/120
	I0815 23:36:49.946564   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 101/120
	I0815 23:36:50.948518   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 102/120
	I0815 23:36:51.949898   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 103/120
	I0815 23:36:52.951218   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 104/120
	I0815 23:36:53.953357   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 105/120
	I0815 23:36:54.955323   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 106/120
	I0815 23:36:55.956748   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 107/120
	I0815 23:36:56.958134   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 108/120
	I0815 23:36:57.960218   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 109/120
	I0815 23:36:58.962271   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 110/120
	I0815 23:36:59.964401   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 111/120
	I0815 23:37:00.966586   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 112/120
	I0815 23:37:01.967836   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 113/120
	I0815 23:37:02.969256   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 114/120
	I0815 23:37:03.971394   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 115/120
	I0815 23:37:04.972971   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 116/120
	I0815 23:37:05.974510   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 117/120
	I0815 23:37:06.976485   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 118/120
	I0815 23:37:07.978013   38760 main.go:141] libmachine: (ha-175414-m04) Waiting for machine to stop 119/120
	I0815 23:37:08.978512   38760 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 23:37:08.978568   38760 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 23:37:08.980698   38760 out.go:201] 
	W0815 23:37:08.982434   38760 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 23:37:08.982460   38760 out.go:270] * 
	* 
	W0815 23:37:08.985565   38760 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 23:37:08.987088   38760 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-175414 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr: exit status 3 (19.053797448s)

                                                
                                                
-- stdout --
	ha-175414
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175414-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:37:09.030884   39188 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:37:09.031003   39188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:37:09.031014   39188 out.go:358] Setting ErrFile to fd 2...
	I0815 23:37:09.031018   39188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:37:09.031225   39188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:37:09.031421   39188 out.go:352] Setting JSON to false
	I0815 23:37:09.031447   39188 mustload.go:65] Loading cluster: ha-175414
	I0815 23:37:09.031547   39188 notify.go:220] Checking for updates...
	I0815 23:37:09.031908   39188 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:37:09.031925   39188 status.go:255] checking status of ha-175414 ...
	I0815 23:37:09.032311   39188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:37:09.032376   39188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:37:09.056958   39188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I0815 23:37:09.057444   39188 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:37:09.058073   39188 main.go:141] libmachine: Using API Version  1
	I0815 23:37:09.058174   39188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:37:09.058536   39188 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:37:09.058713   39188 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:37:09.060514   39188 status.go:330] ha-175414 host status = "Running" (err=<nil>)
	I0815 23:37:09.060530   39188 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:37:09.060859   39188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:37:09.060894   39188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:37:09.076828   39188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I0815 23:37:09.077273   39188 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:37:09.077738   39188 main.go:141] libmachine: Using API Version  1
	I0815 23:37:09.077751   39188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:37:09.078106   39188 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:37:09.078307   39188 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:37:09.081245   39188 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:37:09.081613   39188 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:37:09.081633   39188 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:37:09.081755   39188 host.go:66] Checking if "ha-175414" exists ...
	I0815 23:37:09.082087   39188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:37:09.082125   39188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:37:09.097148   39188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0815 23:37:09.097611   39188 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:37:09.098138   39188 main.go:141] libmachine: Using API Version  1
	I0815 23:37:09.098161   39188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:37:09.098454   39188 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:37:09.098599   39188 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:37:09.098784   39188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:37:09.098821   39188 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:37:09.101513   39188 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:37:09.101993   39188 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:37:09.102013   39188 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:37:09.102142   39188 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:37:09.102275   39188 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:37:09.102436   39188 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:37:09.102566   39188 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:37:09.188067   39188 ssh_runner.go:195] Run: systemctl --version
	I0815 23:37:09.195018   39188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:37:09.211659   39188 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:37:09.211688   39188 api_server.go:166] Checking apiserver status ...
	I0815 23:37:09.211736   39188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:37:09.228082   39188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5064/cgroup
	W0815 23:37:09.239698   39188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5064/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:37:09.239759   39188 ssh_runner.go:195] Run: ls
	I0815 23:37:09.244606   39188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:37:09.249204   39188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:37:09.249229   39188 status.go:422] ha-175414 apiserver status = Running (err=<nil>)
	I0815 23:37:09.249241   39188 status.go:257] ha-175414 status: &{Name:ha-175414 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:37:09.249262   39188 status.go:255] checking status of ha-175414-m02 ...
	I0815 23:37:09.249585   39188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:37:09.249618   39188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:37:09.264272   39188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0815 23:37:09.264690   39188 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:37:09.265150   39188 main.go:141] libmachine: Using API Version  1
	I0815 23:37:09.265168   39188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:37:09.265503   39188 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:37:09.265661   39188 main.go:141] libmachine: (ha-175414-m02) Calling .GetState
	I0815 23:37:09.267395   39188 status.go:330] ha-175414-m02 host status = "Running" (err=<nil>)
	I0815 23:37:09.267411   39188 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:37:09.267705   39188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:37:09.267733   39188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:37:09.282420   39188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I0815 23:37:09.282789   39188 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:37:09.283288   39188 main.go:141] libmachine: Using API Version  1
	I0815 23:37:09.283311   39188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:37:09.283648   39188 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:37:09.283853   39188 main.go:141] libmachine: (ha-175414-m02) Calling .GetIP
	I0815 23:37:09.286893   39188 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:37:09.287258   39188 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:32:30 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:37:09.287303   39188 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:37:09.287458   39188 host.go:66] Checking if "ha-175414-m02" exists ...
	I0815 23:37:09.287732   39188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:37:09.287771   39188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:37:09.302208   39188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39383
	I0815 23:37:09.302646   39188 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:37:09.303072   39188 main.go:141] libmachine: Using API Version  1
	I0815 23:37:09.303089   39188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:37:09.303395   39188 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:37:09.303603   39188 main.go:141] libmachine: (ha-175414-m02) Calling .DriverName
	I0815 23:37:09.303781   39188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:37:09.303804   39188 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHHostname
	I0815 23:37:09.306189   39188 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:37:09.306521   39188 main.go:141] libmachine: (ha-175414-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:bf:67", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:32:30 +0000 UTC Type:0 Mac:52:54:00:3f:bf:67 Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-175414-m02 Clientid:01:52:54:00:3f:bf:67}
	I0815 23:37:09.306553   39188 main.go:141] libmachine: (ha-175414-m02) DBG | domain ha-175414-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:3f:bf:67 in network mk-ha-175414
	I0815 23:37:09.306702   39188 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHPort
	I0815 23:37:09.306867   39188 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHKeyPath
	I0815 23:37:09.307019   39188 main.go:141] libmachine: (ha-175414-m02) Calling .GetSSHUsername
	I0815 23:37:09.307140   39188 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m02/id_rsa Username:docker}
	I0815 23:37:09.391619   39188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:37:09.411078   39188 kubeconfig.go:125] found "ha-175414" server: "https://192.168.39.254:8443"
	I0815 23:37:09.411109   39188 api_server.go:166] Checking apiserver status ...
	I0815 23:37:09.411154   39188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:37:09.426839   39188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup
	W0815 23:37:09.437394   39188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1537/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:37:09.437444   39188 ssh_runner.go:195] Run: ls
	I0815 23:37:09.441898   39188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 23:37:09.446208   39188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 23:37:09.446229   39188 status.go:422] ha-175414-m02 apiserver status = Running (err=<nil>)
	I0815 23:37:09.446240   39188 status.go:257] ha-175414-m02 status: &{Name:ha-175414-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:37:09.446260   39188 status.go:255] checking status of ha-175414-m04 ...
	I0815 23:37:09.446639   39188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:37:09.446680   39188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:37:09.461486   39188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0815 23:37:09.461925   39188 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:37:09.462453   39188 main.go:141] libmachine: Using API Version  1
	I0815 23:37:09.462468   39188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:37:09.462771   39188 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:37:09.463026   39188 main.go:141] libmachine: (ha-175414-m04) Calling .GetState
	I0815 23:37:09.464441   39188 status.go:330] ha-175414-m04 host status = "Running" (err=<nil>)
	I0815 23:37:09.464454   39188 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:37:09.464715   39188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:37:09.464743   39188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:37:09.479230   39188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I0815 23:37:09.479594   39188 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:37:09.480193   39188 main.go:141] libmachine: Using API Version  1
	I0815 23:37:09.480208   39188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:37:09.480547   39188 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:37:09.480778   39188 main.go:141] libmachine: (ha-175414-m04) Calling .GetIP
	I0815 23:37:09.483949   39188 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:37:09.484468   39188 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:34:37 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:37:09.484495   39188 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:37:09.484666   39188 host.go:66] Checking if "ha-175414-m04" exists ...
	I0815 23:37:09.484949   39188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:37:09.484984   39188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:37:09.499532   39188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I0815 23:37:09.499950   39188 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:37:09.500412   39188 main.go:141] libmachine: Using API Version  1
	I0815 23:37:09.500444   39188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:37:09.500727   39188 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:37:09.500916   39188 main.go:141] libmachine: (ha-175414-m04) Calling .DriverName
	I0815 23:37:09.501104   39188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:37:09.501121   39188 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHHostname
	I0815 23:37:09.503861   39188 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:37:09.504253   39188 main.go:141] libmachine: (ha-175414-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:de:3d", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:34:37 +0000 UTC Type:0 Mac:52:54:00:69:de:3d Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-175414-m04 Clientid:01:52:54:00:69:de:3d}
	I0815 23:37:09.504277   39188 main.go:141] libmachine: (ha-175414-m04) DBG | domain ha-175414-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:69:de:3d in network mk-ha-175414
	I0815 23:37:09.504412   39188 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHPort
	I0815 23:37:09.504568   39188 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHKeyPath
	I0815 23:37:09.504737   39188 main.go:141] libmachine: (ha-175414-m04) Calling .GetSSHUsername
	I0815 23:37:09.504884   39188 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414-m04/id_rsa Username:docker}
	W0815 23:37:28.042122   39188 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	W0815 23:37:28.042199   39188 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0815 23:37:28.042239   39188 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0815 23:37:28.042247   39188 status.go:257] ha-175414-m04 status: &{Name:ha-175414-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0815 23:37:28.042265   39188 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-175414 -n ha-175414
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-175414 logs -n 25: (1.717729345s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-175414 ssh -n ha-175414-m02 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04:/home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m04 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp testdata/cp-test.txt                                               | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile430320474/001/cp-test_ha-175414-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414:/home/docker/cp-test_ha-175414-m04_ha-175414.txt                      |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414 sudo cat                                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414.txt                                |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m02:/home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m02 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m03:/home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n                                                                | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | ha-175414-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-175414 ssh -n ha-175414-m03 sudo cat                                         | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC | 15 Aug 24 23:25 UTC |
	|         | /home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-175414 node stop m02 -v=7                                                    | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:25 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-175414 node start m02 -v=7                                                   | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:27 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-175414 -v=7                                                          | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-175414 -v=7                                                               | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:28 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-175414 --wait=true -v=7                                                   | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:30 UTC | 15 Aug 24 23:34 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-175414                                                               | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:34 UTC |                     |
	| node    | ha-175414 node delete m03 -v=7                                                  | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:34 UTC | 15 Aug 24 23:35 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-175414 stop -v=7                                                             | ha-175414 | jenkins | v1.33.1 | 15 Aug 24 23:35 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:30:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:30:34.642752   36963 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:30:34.642880   36963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:30:34.642890   36963 out.go:358] Setting ErrFile to fd 2...
	I0815 23:30:34.642896   36963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:30:34.643108   36963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:30:34.644159   36963 out.go:352] Setting JSON to false
	I0815 23:30:34.645446   36963 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4335,"bootTime":1723760300,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:30:34.645516   36963 start.go:139] virtualization: kvm guest
	I0815 23:30:34.647349   36963 out.go:177] * [ha-175414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:30:34.649060   36963 notify.go:220] Checking for updates...
	I0815 23:30:34.649072   36963 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:30:34.650519   36963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:30:34.651631   36963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:30:34.652723   36963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:30:34.653920   36963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:30:34.655131   36963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:30:34.656847   36963 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:30:34.656957   36963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:30:34.657396   36963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:30:34.657436   36963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:30:34.673264   36963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0815 23:30:34.673746   36963 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:30:34.674352   36963 main.go:141] libmachine: Using API Version  1
	I0815 23:30:34.674371   36963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:30:34.674732   36963 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:30:34.674973   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:30:34.711109   36963 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 23:30:34.712353   36963 start.go:297] selected driver: kvm2
	I0815 23:30:34.712376   36963 start.go:901] validating driver "kvm2" against &{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.32 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:30:34.712582   36963 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:30:34.712934   36963 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:30:34.713012   36963 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:30:34.727574   36963 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:30:34.728246   36963 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:30:34.728314   36963 cni.go:84] Creating CNI manager for ""
	I0815 23:30:34.728329   36963 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 23:30:34.728396   36963 start.go:340] cluster config:
	{Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.32 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:30:34.728554   36963 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:30:34.730537   36963 out.go:177] * Starting "ha-175414" primary control-plane node in "ha-175414" cluster
	I0815 23:30:34.731739   36963 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:30:34.731776   36963 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:30:34.731783   36963 cache.go:56] Caching tarball of preloaded images
	I0815 23:30:34.731866   36963 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:30:34.731882   36963 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:30:34.731995   36963 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/config.json ...
	I0815 23:30:34.732216   36963 start.go:360] acquireMachinesLock for ha-175414: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:30:34.732278   36963 start.go:364] duration metric: took 36.827µs to acquireMachinesLock for "ha-175414"
	I0815 23:30:34.732306   36963 start.go:96] Skipping create...Using existing machine configuration
	I0815 23:30:34.732318   36963 fix.go:54] fixHost starting: 
	I0815 23:30:34.732562   36963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:30:34.732590   36963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:30:34.748338   36963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0815 23:30:34.748768   36963 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:30:34.749202   36963 main.go:141] libmachine: Using API Version  1
	I0815 23:30:34.749222   36963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:30:34.749532   36963 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:30:34.749723   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:30:34.749908   36963 main.go:141] libmachine: (ha-175414) Calling .GetState
	I0815 23:30:34.751646   36963 fix.go:112] recreateIfNeeded on ha-175414: state=Running err=<nil>
	W0815 23:30:34.751663   36963 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 23:30:34.753637   36963 out.go:177] * Updating the running kvm2 "ha-175414" VM ...
	I0815 23:30:34.754817   36963 machine.go:93] provisionDockerMachine start ...
	I0815 23:30:34.754838   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:30:34.755044   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:34.757515   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:34.757974   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:34.758012   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:34.758140   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:34.758293   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:34.758437   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:34.758581   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:34.758720   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:30:34.758947   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:30:34.758965   36963 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 23:30:34.874050   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414
	
	I0815 23:30:34.874090   36963 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:30:34.874332   36963 buildroot.go:166] provisioning hostname "ha-175414"
	I0815 23:30:34.874372   36963 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:30:34.874606   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:34.877072   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:34.877433   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:34.877460   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:34.877592   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:34.877739   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:34.877905   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:34.878051   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:34.878216   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:30:34.878393   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:30:34.878406   36963 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-175414 && echo "ha-175414" | sudo tee /etc/hostname
	I0815 23:30:35.004425   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-175414
	
	I0815 23:30:35.004446   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:35.007473   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.007837   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.007863   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.008040   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:35.008194   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.008322   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.008408   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:35.008533   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:30:35.008730   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:30:35.008754   36963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-175414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-175414/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-175414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:30:35.123123   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:30:35.123161   36963 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:30:35.123204   36963 buildroot.go:174] setting up certificates
	I0815 23:30:35.123223   36963 provision.go:84] configureAuth start
	I0815 23:30:35.123233   36963 main.go:141] libmachine: (ha-175414) Calling .GetMachineName
	I0815 23:30:35.123488   36963 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:30:35.126121   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.126506   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.126534   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.126685   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:35.129150   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.129489   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.129515   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.129732   36963 provision.go:143] copyHostCerts
	I0815 23:30:35.129795   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:30:35.129858   36963 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:30:35.129881   36963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:30:35.129966   36963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:30:35.130083   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:30:35.130107   36963 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:30:35.130115   36963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:30:35.130153   36963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:30:35.130229   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:30:35.130252   36963 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:30:35.130261   36963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:30:35.130290   36963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:30:35.130384   36963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.ha-175414 san=[127.0.0.1 192.168.39.67 ha-175414 localhost minikube]
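The server certificate generated above carries SANs for 127.0.0.1, 192.168.39.67, ha-175414, localhost and minikube. A minimal crypto/x509 sketch producing a certificate with the same SANs; it is self-signed for brevity, whereas the provisioner signs with the minikube CA (ca.pem/ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-175414"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs reported by the provisioner above.
            DNSNames:    []string{"ha-175414", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
        }
        // Self-signed here for brevity; the real server.pem is signed by the minikube CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }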
	I0815 23:30:35.447331   36963 provision.go:177] copyRemoteCerts
	I0815 23:30:35.447380   36963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:30:35.447403   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:35.449888   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.450205   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.450230   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.450434   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:35.450620   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.450771   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:35.450900   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:30:35.536921   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:30:35.537020   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:30:35.564904   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:30:35.565000   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0815 23:30:35.593454   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:30:35.593532   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 23:30:35.620556   36963 provision.go:87] duration metric: took 497.31969ms to configureAuth
	I0815 23:30:35.620590   36963 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:30:35.620831   36963 config.go:182] Loaded profile config "ha-175414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:30:35.620928   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:30:35.623626   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.624030   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:30:35.624063   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:30:35.624243   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:30:35.624435   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.624635   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:30:35.624770   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:30:35.624954   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:30:35.625149   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:30:35.625170   36963 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:32:06.386399   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:32:06.386431   36963 machine.go:96] duration metric: took 1m31.6315979s to provisionDockerMachine
	I0815 23:32:06.386447   36963 start.go:293] postStartSetup for "ha-175414" (driver="kvm2")
	I0815 23:32:06.386462   36963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:32:06.386483   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.386827   36963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:32:06.386859   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.390005   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.390379   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.390401   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.390579   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.390754   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.390941   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.391077   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:32:06.478027   36963 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:32:06.482432   36963 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:32:06.482464   36963 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:32:06.482535   36963 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:32:06.482640   36963 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:32:06.482653   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:32:06.482755   36963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:32:06.492779   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:32:06.516876   36963 start.go:296] duration metric: took 130.414074ms for postStartSetup
	I0815 23:32:06.516914   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.517200   36963 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 23:32:06.517223   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.519766   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.520222   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.520250   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.520377   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.520592   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.520748   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.520886   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	W0815 23:32:06.604520   36963 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0815 23:32:06.604552   36963 fix.go:56] duration metric: took 1m31.872235233s for fixHost
	I0815 23:32:06.604578   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.607164   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.607491   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.607526   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.607680   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.607875   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.608011   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.608112   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.608237   36963 main.go:141] libmachine: Using SSH client type: native
	I0815 23:32:06.608450   36963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0815 23:32:06.608464   36963 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:32:06.718720   36963 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723764726.686038597
	
	I0815 23:32:06.718741   36963 fix.go:216] guest clock: 1723764726.686038597
	I0815 23:32:06.718752   36963 fix.go:229] Guest: 2024-08-15 23:32:06.686038597 +0000 UTC Remote: 2024-08-15 23:32:06.604561002 +0000 UTC m=+91.996716584 (delta=81.477595ms)
	I0815 23:32:06.718791   36963 fix.go:200] guest clock delta is within tolerance: 81.477595ms
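The tolerance check above compares the guest clock (1723764726.686038597) with the host-side timestamp and accepts an 81.477595ms delta. A small sketch of the same arithmetic; the 2-second tolerance is an assumed illustrative threshold, not necessarily minikube's actual limit:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values reported in the log above: guest clock vs. remote (host) timestamp.
        guest := time.Unix(1723764726, 686038597)
        remote := time.Date(2024, 8, 15, 23, 32, 6, 604561002, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold for illustration only
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }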
	I0815 23:32:06.718798   36963 start.go:83] releasing machines lock for "ha-175414", held for 1m31.986499668s
	I0815 23:32:06.718838   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.719111   36963 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:32:06.721494   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.721835   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.721876   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.722070   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.722609   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.722790   36963 main.go:141] libmachine: (ha-175414) Calling .DriverName
	I0815 23:32:06.722886   36963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:32:06.722923   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.723024   36963 ssh_runner.go:195] Run: cat /version.json
	I0815 23:32:06.723051   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHHostname
	I0815 23:32:06.725665   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.725767   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.726031   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.726059   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.726200   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.726208   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:06.726258   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:06.726342   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.726405   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHPort
	I0815 23:32:06.726534   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.726599   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHKeyPath
	I0815 23:32:06.726750   36963 main.go:141] libmachine: (ha-175414) Calling .GetSSHUsername
	I0815 23:32:06.726756   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:32:06.726879   36963 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/ha-175414/id_rsa Username:docker}
	I0815 23:32:06.829038   36963 ssh_runner.go:195] Run: systemctl --version
	I0815 23:32:06.835465   36963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:32:07.000226   36963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 23:32:07.007179   36963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:32:07.007251   36963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:32:07.016736   36963 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 23:32:07.016761   36963 start.go:495] detecting cgroup driver to use...
	I0815 23:32:07.016825   36963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:32:07.033938   36963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:32:07.048275   36963 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:32:07.048337   36963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:32:07.062197   36963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:32:07.075852   36963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:32:07.231043   36963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:32:07.418512   36963 docker.go:233] disabling docker service ...
	I0815 23:32:07.418573   36963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:32:07.470311   36963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:32:07.499546   36963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:32:07.667244   36963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:32:07.823075   36963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:32:07.837867   36963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:32:07.857204   36963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:32:07.857269   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.868611   36963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:32:07.868671   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.879608   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.890478   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.901456   36963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:32:07.912582   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.923869   36963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.935473   36963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:32:07.946916   36963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:32:07.956914   36963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:32:07.967164   36963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:32:08.134497   36963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 23:32:17.908997   36963 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.774466167s)
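The sequence of sed edits above pins the pause image to registry.k8s.io/pause:3.10 and the cgroup manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A rough standalone equivalent of those two rewrites (a sketch, not the minikube code):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of the two sed edits above: pin the pause image and the cgroup driver.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            log.Fatal(err)
        }
        // A systemctl daemon-reload and systemctl restart crio are still needed afterwards,
        // as the log shows.
    }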
	I0815 23:32:17.909034   36963 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:32:17.909089   36963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:32:17.915543   36963 start.go:563] Will wait 60s for crictl version
	I0815 23:32:17.915604   36963 ssh_runner.go:195] Run: which crictl
	I0815 23:32:17.920068   36963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:32:17.958753   36963 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:32:17.958827   36963 ssh_runner.go:195] Run: crio --version
	I0815 23:32:17.988670   36963 ssh_runner.go:195] Run: crio --version
	I0815 23:32:18.024173   36963 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:32:18.025405   36963 main.go:141] libmachine: (ha-175414) Calling .GetIP
	I0815 23:32:18.027801   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:18.028125   36963 main.go:141] libmachine: (ha-175414) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:98:13", ip: ""} in network mk-ha-175414: {Iface:virbr1 ExpiryTime:2024-08-16 00:20:53 +0000 UTC Type:0 Mac:52:54:00:f0:98:13 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-175414 Clientid:01:52:54:00:f0:98:13}
	I0815 23:32:18.028147   36963 main.go:141] libmachine: (ha-175414) DBG | domain ha-175414 has defined IP address 192.168.39.67 and MAC address 52:54:00:f0:98:13 in network mk-ha-175414
	I0815 23:32:18.028340   36963 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:32:18.033587   36963 kubeadm.go:883] updating cluster {Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.32 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 23:32:18.033708   36963 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:32:18.033744   36963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:32:18.088265   36963 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:32:18.088287   36963 crio.go:433] Images already preloaded, skipping extraction
	I0815 23:32:18.088338   36963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:32:18.125576   36963 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:32:18.125599   36963 cache_images.go:84] Images are preloaded, skipping loading
	I0815 23:32:18.125606   36963 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.31.0 crio true true} ...
	I0815 23:32:18.125719   36963 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-175414 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:32:18.125789   36963 ssh_runner.go:195] Run: crio config
	I0815 23:32:18.174872   36963 cni.go:84] Creating CNI manager for ""
	I0815 23:32:18.174888   36963 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 23:32:18.174897   36963 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 23:32:18.174921   36963 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-175414 NodeName:ha-175414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 23:32:18.175055   36963 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-175414"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
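The kubeadm config above is rendered from the options logged at kubeadm.go:181. As a sketch of how such a document can be produced from those parameters, here is a hypothetical, heavily trimmed text/template rendering only the InitConfiguration endpoint (names and template are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // Hypothetical, trimmed-down template covering just the InitConfiguration
    // endpoint; the full config rendered above carries many more fields.
    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
      criSocket: unix:///var/run/crio/crio.sock
    `

    func main() {
        params := struct {
            AdvertiseAddress string
            APIServerPort    int
            NodeName         string
        }{"192.168.39.67", 8443, "ha-175414"} // values from the options logged above
        template.Must(template.New("init").Parse(initTmpl)).Execute(os.Stdout, params)
    }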
	
	I0815 23:32:18.175074   36963 kube-vip.go:115] generating kube-vip config ...
	I0815 23:32:18.175112   36963 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 23:32:18.186777   36963 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 23:32:18.186895   36963 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
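The kube-vip static pod above holds the control-plane VIP 192.168.39.254 and load-balances port 8443. A small probe (a sketch under assumptions, not part of the test) that checks whether anything answers TLS on that VIP; certificate verification is skipped because only reachability is of interest:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // VIP and port taken from the kube-vip config rendered above.
        conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 3 * time.Second}, "tcp",
            "192.168.39.254:8443", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("VIP reachable")
    }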
	I0815 23:32:18.186957   36963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:32:18.196674   36963 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 23:32:18.196734   36963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 23:32:18.206989   36963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0815 23:32:18.224416   36963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:32:18.242040   36963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0815 23:32:18.259472   36963 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 23:32:18.277958   36963 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 23:32:18.281960   36963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:32:18.438386   36963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:32:18.453156   36963 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414 for IP: 192.168.39.67
	I0815 23:32:18.453182   36963 certs.go:194] generating shared ca certs ...
	I0815 23:32:18.453203   36963 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:32:18.453386   36963 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:32:18.453447   36963 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:32:18.453463   36963 certs.go:256] generating profile certs ...
	I0815 23:32:18.453584   36963 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/client.key
	I0815 23:32:18.453624   36963 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.40510575
	I0815 23:32:18.453651   36963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.40510575 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67 192.168.39.19 192.168.39.100 192.168.39.254]
	I0815 23:32:18.622827   36963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.40510575 ...
	I0815 23:32:18.622856   36963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.40510575: {Name:mkeb549781490d3c87bc4f21e245a8f5b0f891cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:32:18.623061   36963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.40510575 ...
	I0815 23:32:18.623076   36963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.40510575: {Name:mkdea78273ad07797106df7f96e935f9a1aaa6ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:32:18.623175   36963 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt.40510575 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt
	I0815 23:32:18.623347   36963 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key.40510575 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key
	I0815 23:32:18.623473   36963 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key
	I0815 23:32:18.623488   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:32:18.623500   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:32:18.623513   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:32:18.623527   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:32:18.623540   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:32:18.623553   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:32:18.623567   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:32:18.623579   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:32:18.623629   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:32:18.623655   36963 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:32:18.623665   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:32:18.623685   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:32:18.623704   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:32:18.623726   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:32:18.623772   36963 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:32:18.623803   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:32:18.623817   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:32:18.623831   36963 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:32:18.624389   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:32:18.651337   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:32:18.675919   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:32:18.700888   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:32:18.724935   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 23:32:18.748899   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 23:32:18.773777   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:32:18.798899   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/ha-175414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 23:32:18.824206   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:32:18.853444   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:32:18.882367   36963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:32:18.910136   36963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 23:32:18.929146   36963 ssh_runner.go:195] Run: openssl version
	I0815 23:32:18.935934   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:32:18.947225   36963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:32:18.951968   36963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:32:18.952022   36963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:32:18.957858   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0815 23:32:18.967678   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:32:18.978633   36963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:32:18.983247   36963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:32:18.983300   36963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:32:18.989012   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 23:32:18.998524   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:32:19.009833   36963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:32:19.014485   36963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:32:19.014534   36963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:32:19.020120   36963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 23:32:19.029697   36963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:32:19.034636   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 23:32:19.040323   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 23:32:19.046077   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 23:32:19.051724   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 23:32:19.058061   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 23:32:19.063978   36963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
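Each check above is openssl x509 -checkend 86400, i.e. "does this certificate expire within the next 24 hours?". A Go sketch of the same test for one of the listed certificates, using only the standard library:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid past 24h, notAfter:", cert.NotAfter)
    }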
	I0815 23:32:19.070033   36963 kubeadm.go:392] StartCluster: {Name:ha-175414 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-175414 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.32 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:32:19.070136   36963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 23:32:19.070173   36963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 23:32:19.109474   36963 cri.go:89] found id: "453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99"
	I0815 23:32:19.109493   36963 cri.go:89] found id: "8ff057f6573bd4d735de692c58a6a38952a75f3f18bc080cc400737049a6e7da"
	I0815 23:32:19.109497   36963 cri.go:89] found id: "5be37cafbe7f3c97cd0ffe329036589d4a99bdd61f07075c5cec580dc4f0f678"
	I0815 23:32:19.109500   36963 cri.go:89] found id: "61a664a258c6badb719a5d06b0dddbb21dabcd05c5104e75aa2f6ba91e819d98"
	I0815 23:32:19.109502   36963 cri.go:89] found id: "d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93"
	I0815 23:32:19.109505   36963 cri.go:89] found id: "6bdc1076f0d1144cfe42a2915eb527e93050b3816630ad9a61f849f0db08fb64"
	I0815 23:32:19.109508   36963 cri.go:89] found id: "fd145e0bce0eb84f0b1faee11e60728bc4fca62280dd72e88596ede9aaac687e"
	I0815 23:32:19.109510   36963 cri.go:89] found id: "dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a"
	I0815 23:32:19.109512   36963 cri.go:89] found id: "70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137"
	I0815 23:32:19.109519   36963 cri.go:89] found id: "41980bfc0d44adc634f2f6ae3e9e278b6554385821c8a31946031727e434de55"
	I0815 23:32:19.109521   36963 cri.go:89] found id: "aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391"
	I0815 23:32:19.109534   36963 cri.go:89] found id: "af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755"
	I0815 23:32:19.109537   36963 cri.go:89] found id: "b61812e4ed00f24c486f8605914aff96e3dfd21370bdafa90e8a25b72e72ceb8"
	I0815 23:32:19.109539   36963 cri.go:89] found id: "0f0f5c055e67f525bb9ab071decbc02aa27ed220214653ed7246b3b41f6e5fd0"
	I0815 23:32:19.109543   36963 cri.go:89] found id: ""
	I0815 23:32:19.109583   36963 ssh_runner.go:195] Run: sudo runc list -f json
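The container IDs above come from crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system, which prints one ID per line. A small wrapper sketch that runs the same command and counts the IDs (assumes sudo and crictl are available on the host, as they are inside the minikube VM):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same listing as above: container IDs in the kube-system namespace, one per line.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        ids := strings.Fields(strings.TrimSpace(string(out)))
        fmt.Printf("found %d kube-system containers\n", len(ids))
        for _, id := range ids {
            fmt.Println(id)
        }
    }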
	
	
	==> CRI-O <==
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.728187395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91be7363b3925d4c4e5997a4643efcf6be92524d7bdc7cdd78ec3e7f8d61d329,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723764833861005579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764788863614703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764787851141925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31267b48719346c2570c7dd7e71d8daefd6b6e0afd5a219d2c9c91fbf03835fb,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723764778856533533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b5e61456c820568a14a7e3b41f5d838357e424299ab8f52aa88d2133af83ac,PodSandboxId:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764777158867822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09cf1043a0abee0ecf8227331084602bc4610657a40df0ad3bcc20ec14275259,PodSandboxId:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723764754674944654,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362,PodSandboxId:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764750163623247,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd,PodSandboxId:e4716878078ff8e0ec331b9fce712691476c897f9d38b88f87f02ba0003f849e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723764744855706654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b,PodSandboxId:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723764743968148849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429,PodSandboxId:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764743935340077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082,PodSandboxId:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764743812426484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
8d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1,PodSandboxId:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764743727237860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa
28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723764743660487260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60f
f5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723764743589026191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99,PodSandboxId:0e7cbb8b2f807a28bf3efd56ecb4c990dc8c1c994f6aa3ebbbd3c203add6cbb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764727427096623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kub
ernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723764234620157579,Labels:map[string]stri
ng{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764100474788152,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723764088513493764,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723764086149001826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723764074424409898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723764074344897958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5ecc30d-1a0c-45a0-9f35-f0c08221e140 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.755919706Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1533d388-c810-49a3-895b-8356232d9b6c name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.756330965Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-ztvms,Uid:68404862-5be0-4c89-8a76-4eb9f9dc682b,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764777007843292,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:23:51.809332415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-175414,Uid:fc5eb109d09f5a9c4baa9f95d5646cfd,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1723764754579172527,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{kubernetes.io/config.hash: fc5eb109d09f5a9c4baa9f95d5646cfd,kubernetes.io/config.seen: 2024-08-15T23:32:18.245789713Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-zrv4c,Uid:97d399d0-871e-4e59-8c4d-093b5a29a107,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1723764743304808195,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08
-15T23:21:39.845001584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-vkm5s,Uid:1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743294535926,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:39.850481660Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&PodSandboxMetadata{Name:etcd-ha-175414,Uid:88d31a53d81e2448a936fab3b5f0449d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743285949752,Labels:map[string]string{componen
t: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.67:2379,kubernetes.io/config.hash: 88d31a53d81e2448a936fab3b5f0449d,kubernetes.io/config.seen: 2024-08-15T23:21:20.809887022Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&PodSandboxMetadata{Name:kindnet-jjcdm,Uid:534a226d-c0b6-4a2f-8b2c-27921c9e1aca,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743280558095,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,k8s-app: kindnet,pod-template-generation: 1,
tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:25.050451541Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7042d764-6043-449c-a1e9-aaa28256c579,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743274574230,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"}
,\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T23:21:39.851222458Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-175414,Uid:02dd932293ae8c928398fa28db141a52,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743266354204,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,tier
: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 02dd932293ae8c928398fa28db141a52,kubernetes.io/config.seen: 2024-08-15T23:21:20.809884588Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-175414,Uid:791e1ef83a25ef60ff5fe0211ab052ac,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743264542658,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 791e1ef83a25ef60ff5fe0211ab052ac,kubernetes.io/config.seen: 2024-08-15T23:21:20.809883037Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4716878078ff8e0ec331b9fce712691476c897f9d38b88
f87f02ba0003f849e,Metadata:&PodSandboxMetadata{Name:kube-proxy-4frcn,Uid:2831334a-a379-4f6d-ada3-53a01fc6f65e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743245191467,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:25.055760598Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-175414,Uid:6c3f4194728ec576cf8056e92c6671ad,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743242334836,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.67:8443,kubernetes.io/config.hash: 6c3f4194728ec576cf8056e92c6671ad,kubernetes.io/config.seen: 2024-08-15T23:21:20.809877864Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0e7cbb8b2f807a28bf3efd56ecb4c990dc8c1c994f6aa3ebbbd3c203add6cbb0,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-zrv4c,Uid:97d399d0-871e-4e59-8c4d-093b5a29a107,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1723764727270341106,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:39.845001584Z,kubernetes.io
/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-ztvms,Uid:68404862-5be0-4c89-8a76-4eb9f9dc682b,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723764233623404001,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:23:51.809332415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-vkm5s,Uid:1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723764100173956430,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:39.850481660Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&PodSandboxMetadata{Name:kube-proxy-4frcn,Uid:2831334a-a379-4f6d-ada3-53a01fc6f65e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723764085983499222,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:25.055760598Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Po
dSandbox{Id:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&PodSandboxMetadata{Name:kindnet-jjcdm,Uid:534a226d-c0b6-4a2f-8b2c-27921c9e1aca,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723764085981415008,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:25.050451541Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-175414,Uid:02dd932293ae8c928398fa28db141a52,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723764074107581147,Labels:map[string]string{component: kube-scheduler,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 02dd932293ae8c928398fa28db141a52,kubernetes.io/config.seen: 2024-08-15T23:21:13.636164281Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&PodSandboxMetadata{Name:etcd-ha-175414,Uid:88d31a53d81e2448a936fab3b5f0449d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723764074103556415,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.67:2379,kubernetes.io/config.hash: 88d31a53d81
e2448a936fab3b5f0449d,kubernetes.io/config.seen: 2024-08-15T23:21:13.636157482Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1533d388-c810-49a3-895b-8356232d9b6c name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.757096289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5f2587b-328c-4050-9bc6-6a8ecc29eaa1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.757166335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5f2587b-328c-4050-9bc6-6a8ecc29eaa1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.757626042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91be7363b3925d4c4e5997a4643efcf6be92524d7bdc7cdd78ec3e7f8d61d329,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723764833861005579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764788863614703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764787851141925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31267b48719346c2570c7dd7e71d8daefd6b6e0afd5a219d2c9c91fbf03835fb,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723764778856533533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b5e61456c820568a14a7e3b41f5d838357e424299ab8f52aa88d2133af83ac,PodSandboxId:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764777158867822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09cf1043a0abee0ecf8227331084602bc4610657a40df0ad3bcc20ec14275259,PodSandboxId:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723764754674944654,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362,PodSandboxId:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764750163623247,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd,PodSandboxId:e4716878078ff8e0ec331b9fce712691476c897f9d38b88f87f02ba0003f849e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723764744855706654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b,PodSandboxId:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723764743968148849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429,PodSandboxId:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764743935340077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082,PodSandboxId:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764743812426484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
8d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1,PodSandboxId:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764743727237860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa
28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723764743660487260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60f
f5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723764743589026191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99,PodSandboxId:0e7cbb8b2f807a28bf3efd56ecb4c990dc8c1c994f6aa3ebbbd3c203add6cbb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764727427096623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kub
ernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723764234620157579,Labels:map[string]stri
ng{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764100474788152,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723764088513493764,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723764086149001826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723764074424409898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723764074344897958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5f2587b-328c-4050-9bc6-6a8ecc29eaa1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.780420894Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec652894-75a7-4c94-bba6-97074b8144a2 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.780552478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec652894-75a7-4c94-bba6-97074b8144a2 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.782845830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=539df230-802f-45c5-bb4d-2126f37cf97d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.783344156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765048783320188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=539df230-802f-45c5-bb4d-2126f37cf97d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.784348804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0db3c3de-5911-4e4d-ba32-112d67841b90 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.784408072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0db3c3de-5911-4e4d-ba32-112d67841b90 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.785002769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91be7363b3925d4c4e5997a4643efcf6be92524d7bdc7cdd78ec3e7f8d61d329,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723764833861005579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764788863614703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764787851141925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31267b48719346c2570c7dd7e71d8daefd6b6e0afd5a219d2c9c91fbf03835fb,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723764778856533533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b5e61456c820568a14a7e3b41f5d838357e424299ab8f52aa88d2133af83ac,PodSandboxId:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764777158867822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09cf1043a0abee0ecf8227331084602bc4610657a40df0ad3bcc20ec14275259,PodSandboxId:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723764754674944654,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362,PodSandboxId:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764750163623247,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd,PodSandboxId:e4716878078ff8e0ec331b9fce712691476c897f9d38b88f87f02ba0003f849e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723764744855706654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b,PodSandboxId:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723764743968148849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429,PodSandboxId:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764743935340077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082,PodSandboxId:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764743812426484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
8d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1,PodSandboxId:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764743727237860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa
28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723764743660487260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60f
f5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723764743589026191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99,PodSandboxId:0e7cbb8b2f807a28bf3efd56ecb4c990dc8c1c994f6aa3ebbbd3c203add6cbb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764727427096623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kub
ernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723764234620157579,Labels:map[string]stri
ng{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764100474788152,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723764088513493764,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723764086149001826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723764074424409898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723764074344897958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0db3c3de-5911-4e4d-ba32-112d67841b90 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.837104817Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69d3d6db-36ba-4206-b73a-fa79f4bbcb93 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.837222665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69d3d6db-36ba-4206-b73a-fa79f4bbcb93 name=/runtime.v1.RuntimeService/Version
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.841732030Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad0c682d-1e5e-4f0a-836c-1d5f228d7b0f name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.842108505Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-ztvms,Uid:68404862-5be0-4c89-8a76-4eb9f9dc682b,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764777007843292,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:23:51.809332415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-175414,Uid:fc5eb109d09f5a9c4baa9f95d5646cfd,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1723764754579172527,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{kubernetes.io/config.hash: fc5eb109d09f5a9c4baa9f95d5646cfd,kubernetes.io/config.seen: 2024-08-15T23:32:18.245789713Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-zrv4c,Uid:97d399d0-871e-4e59-8c4d-093b5a29a107,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1723764743304808195,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08
-15T23:21:39.845001584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-vkm5s,Uid:1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743294535926,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:39.850481660Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&PodSandboxMetadata{Name:etcd-ha-175414,Uid:88d31a53d81e2448a936fab3b5f0449d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743285949752,Labels:map[string]string{componen
t: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.67:2379,kubernetes.io/config.hash: 88d31a53d81e2448a936fab3b5f0449d,kubernetes.io/config.seen: 2024-08-15T23:21:20.809887022Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&PodSandboxMetadata{Name:kindnet-jjcdm,Uid:534a226d-c0b6-4a2f-8b2c-27921c9e1aca,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743280558095,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,k8s-app: kindnet,pod-template-generation: 1,
tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:25.050451541Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7042d764-6043-449c-a1e9-aaa28256c579,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743274574230,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"}
,\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T23:21:39.851222458Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-175414,Uid:02dd932293ae8c928398fa28db141a52,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743266354204,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,tier
: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 02dd932293ae8c928398fa28db141a52,kubernetes.io/config.seen: 2024-08-15T23:21:20.809884588Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-175414,Uid:791e1ef83a25ef60ff5fe0211ab052ac,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743264542658,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 791e1ef83a25ef60ff5fe0211ab052ac,kubernetes.io/config.seen: 2024-08-15T23:21:20.809883037Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4716878078ff8e0ec331b9fce712691476c897f9d38b88
f87f02ba0003f849e,Metadata:&PodSandboxMetadata{Name:kube-proxy-4frcn,Uid:2831334a-a379-4f6d-ada3-53a01fc6f65e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743245191467,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:21:25.055760598Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-175414,Uid:6c3f4194728ec576cf8056e92c6671ad,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723764743242334836,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.67:8443,kubernetes.io/config.hash: 6c3f4194728ec576cf8056e92c6671ad,kubernetes.io/config.seen: 2024-08-15T23:21:20.809877864Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ad0c682d-1e5e-4f0a-836c-1d5f228d7b0f name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.842628065Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=778c92ab-9693-44d6-bacb-7c158dd1d689 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.843333598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765048843219150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=778c92ab-9693-44d6-bacb-7c158dd1d689 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.843568127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65f63198-7716-4a96-a072-0dca10deb0b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.843636180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65f63198-7716-4a96-a072-0dca10deb0b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.844031542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91be7363b3925d4c4e5997a4643efcf6be92524d7bdc7cdd78ec3e7f8d61d329,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723764833861005579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764788863614703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764787851141925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b5e61456c820568a14a7e3b41f5d838357e424299ab8f52aa88d2133af83ac,PodSandboxId:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764777158867822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09cf1043a0abee0ecf8227331084602bc4610657a40df0ad3bcc20ec14275259,PodSandboxId:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723764754674944654,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362,PodSandboxId:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764750163623247,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd,PodSandboxId:e4716878078ff8e0ec331b9fce712691476c897f9d38b88f87f02ba0003f849e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723764744855706654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78
ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b,PodSandboxId:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723764743968148849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429,PodSandboxId:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764743935340077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082,PodSandboxId:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764743812426484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1,PodSandboxId:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764743727237860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd93229
3ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65f63198-7716-4a96-a072-0dca10deb0b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.845057829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=463849b6-eb27-45f9-9254-faad89be3958 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.845644179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=463849b6-eb27-45f9-9254-faad89be3958 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 23:37:28 ha-175414 crio[3699]: time="2024-08-15 23:37:28.846169572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:91be7363b3925d4c4e5997a4643efcf6be92524d7bdc7cdd78ec3e7f8d61d329,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723764833861005579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723764788863614703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60ff5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723764787851141925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31267b48719346c2570c7dd7e71d8daefd6b6e0afd5a219d2c9c91fbf03835fb,PodSandboxId:9aa34875f76cf08511a1b40e99585717dbd42c826f7917374aac23ec96ad2e70,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723764778856533533,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7042d764-6043-449c-a1e9-aaa28256c579,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b5e61456c820568a14a7e3b41f5d838357e424299ab8f52aa88d2133af83ac,PodSandboxId:3782c37a72b34e50a496c8351ddd79a54eaace5e814c15c221524bd739d5b0c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723764777158867822,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09cf1043a0abee0ecf8227331084602bc4610657a40df0ad3bcc20ec14275259,PodSandboxId:d66c19a5c116d9279352dd82a7bc4a30e6506406478fc109bba4f8ba793f4044,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723764754674944654,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc5eb109d09f5a9c4baa9f95d5646cfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362,PodSandboxId:6c5918c0042cb65dc8ffc45923e7e816c7febf2f8b3924c8cc3d41fa69f14938,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764750163623247,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd,PodSandboxId:e4716878078ff8e0ec331b9fce712691476c897f9d38b88f87f02ba0003f849e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723764744855706654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b,PodSandboxId:3ba3e04d84149674e0985720df15974d371d63969b0808d301dd2bad4114d008,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723764743968148849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429,PodSandboxId:7fa869b54d0fc9a2664c4b3dcf1a14f625c12705c2c19805056a50afb23d54f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723764743935340077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082,PodSandboxId:1f58063048db7c94dd4c90adc52d06b863b6bca4d4243efb40ff95799b749dc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723764743812426484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
8d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1,PodSandboxId:3359df4c20b285743796920bef05d018163c6f43737e729938ad77948e48ca46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723764743727237860,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa
28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9,PodSandboxId:daa8c968b6f120332db1945c9f7f05427e44f36058567814ad6c87ff9f8a063c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723764743660487260,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791e1ef83a25ef60f
f5fe0211ab052ac,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06,PodSandboxId:30a091962cf5ce7da76e083dac02d116100d460cbf09be55ff52bcf40fc776c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723764743589026191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c3f4194728ec576cf8056e92c6671ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99,PodSandboxId:0e7cbb8b2f807a28bf3efd56ecb4c990dc8c1c994f6aa3ebbbd3c203add6cbb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764727427096623,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zrv4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d399d0-871e-4e59-8c4d-093b5a29a107,},Annotations:map[string]string{io.kub
ernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f2ac1a3791a20a1625738a0df22be414fe02c050d816d4dc970cc70168fe77,PodSandboxId:1555ba5313b4a769fb6f2211c39fdc7aa299a1856e3b465d8d7681fa2f8fa2d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723764234620157579,Labels:map[string]stri
ng{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ztvms,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 68404862-5be0-4c89-8a76-4eb9f9dc682b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93,PodSandboxId:33df4c1e88a573c8d2286a36253735f996b35fd7ab2d905fb2793f9078df826d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723764100474788152,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-vkm5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce51b47-6ac6-4bee-9ec7-6780ea1ea60c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a,PodSandboxId:1392391da1090cc908b4d799a655026ec1ce0b69efd4420fbf922ad5944d5b3f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723764088513493764,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jjcdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534a226d-c0b6-4a2f-8b2c-27921c9e1aca,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137,PodSandboxId:51e2286f4b6df28e214d0e165e4f6175cebcad94f0203df12be1bf420f7e5d30,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723764086149001826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4frcn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2831334a-a379-4f6d-ada3-53a01fc6f65e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391,PodSandboxId:94e761b5a2dbfd359d05eb8509686a17259e92178f662b7a0d684cf3326869f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723764074424409898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d31a53d81e2448a936fab3b5f0449d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755,PodSandboxId:6bc6e4c03eedb785dbae467b30afa0feedb0e2cbfa51fb8cad53dd5afd4d27bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723764074344897958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-175414,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dd932293ae8c928398fa28db141a52,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=463849b6-eb27-45f9-9254-faad89be3958 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	91be7363b3925       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   9aa34875f76cf       storage-provisioner
	3db7adbcee13c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   daa8c968b6f12       kube-controller-manager-ha-175414
	82da16254ec56       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   30a091962cf5c       kube-apiserver-ha-175414
	31267b4871934       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   9aa34875f76cf       storage-provisioner
	e2b5e61456c82       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   3782c37a72b34       busybox-7dff88458-ztvms
	09cf1043a0abe       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   d66c19a5c116d       kube-vip-ha-175414
	0a0b43b81fbcb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   6c5918c0042cb       coredns-6f6b679f8f-zrv4c
	602292b2cbfa5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   e4716878078ff       kube-proxy-4frcn
	7ff4093cdbbdd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   3ba3e04d84149       kindnet-jjcdm
	a08812575d2b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   7fa869b54d0fc       coredns-6f6b679f8f-vkm5s
	34369a9e60b2d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   1f58063048db7       etcd-ha-175414
	55966e7435723       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   3359df4c20b28       kube-scheduler-ha-175414
	f8c3019e323c6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   daa8c968b6f12       kube-controller-manager-ha-175414
	e1edfb586686e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   30a091962cf5c       kube-apiserver-ha-175414
	453ec763ed5d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Exited              coredns                   1                   0e7cbb8b2f807       coredns-6f6b679f8f-zrv4c
	e6f2ac1a3791a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   1555ba5313b4a       busybox-7dff88458-ztvms
	d266fdeedd2d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   33df4c1e88a57       coredns-6f6b679f8f-vkm5s
	dce83cbb20557       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   1392391da1090       kindnet-jjcdm
	70eb25dbc5fac       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   51e2286f4b6df       kube-proxy-4frcn
	aaba7057e0920       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   94e761b5a2dbf       etcd-ha-175414
	af5abf6569d1f       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   6bc6e4c03eedb       kube-scheduler-ha-175414
	
	
	==> coredns [0a0b43b81fbcbade3277e7762e20fd48833ccfa2abfb0885e0eca1efbf15a362] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:48882->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:48882->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:48902->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:48902->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [453ec763ed5d19afe23bb38311444db0b599eaa612addfed6d52b7eece753f99] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:32932 - 16878 "HINFO IN 2839216306064695090.8854576555639446388. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011032557s
	
	
	==> coredns [a08812575d2b128e041d6ededb312becbb70e71f0e6b53f2a4f934966af52429] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42502->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[741196442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 23:32:38.883) (total time: 10151ms):
	Trace[741196442]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42502->10.96.0.1:443: read: connection reset by peer 10151ms (23:32:49.034)
	Trace[741196442]: [10.151670777s] [10.151670777s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42502->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42512->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:42512->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d266fdeedd2d106370d908441f5847a93e212f4ea203dbeb7405fc75736bfb93] <==
	[INFO] 10.244.0.4:59435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073012s
	[INFO] 10.244.2.2:60026 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235829s
	[INFO] 10.244.2.2:58530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018432s
	[INFO] 10.244.1.2:44913 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119773s
	[INFO] 10.244.1.2:52756 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123167s
	[INFO] 10.244.0.4:39480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124675s
	[INFO] 10.244.0.4:51365 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114789s
	[INFO] 10.244.0.4:49967 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000068329s
	[INFO] 10.244.0.4:42637 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073642s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1900&timeout=8m53s&timeoutSeconds=533&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1900&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1900": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-175414
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T23_21_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:21:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:37:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:33:09 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:33:09 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:33:09 +0000   Thu, 15 Aug 2024 23:21:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:33:09 +0000   Thu, 15 Aug 2024 23:21:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-175414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b0ddee9ca5943d7802a25ee6a9c7f34
	  System UUID:                7b0ddee9-ca59-43d7-802a-25ee6a9c7f34
	  Boot ID:                    a257efb5-ad21-419a-b259-592d48073d80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ztvms              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-vkm5s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-zrv4c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-175414                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-jjcdm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-175414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-175414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4frcn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-175414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-175414                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 4m21s                 kube-proxy       
	  Normal   Starting                 16m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)     kubelet          Node ha-175414 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)     kubelet          Node ha-175414 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)     kubelet          Node ha-175414 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                   kubelet          Node ha-175414 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                   kubelet          Node ha-175414 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m                   kubelet          Node ha-175414 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                   node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal   NodeReady                15m                   kubelet          Node ha-175414 status is now: NodeReady
	  Normal   RegisteredNode           15m                   node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal   RegisteredNode           13m                   node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Warning  ContainerGCFailed        6m9s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m9s (x4 over 6m23s)  kubelet          Node ha-175414 status is now: NodeNotReady
	  Normal   RegisteredNode           4m25s                 node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal   RegisteredNode           4m15s                 node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	  Normal   RegisteredNode           3m18s                 node-controller  Node ha-175414 event: Registered Node ha-175414 in Controller
	
	
	Name:               ha-175414-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_22_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:22:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:37:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:33:51 +0000   Thu, 15 Aug 2024 23:33:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:33:51 +0000   Thu, 15 Aug 2024 23:33:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:33:51 +0000   Thu, 15 Aug 2024 23:33:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:33:51 +0000   Thu, 15 Aug 2024 23:33:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-175414-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e48881ea1334f28a03d47bf7b09ff84
	  System UUID:                1e48881e-a133-4f28-a03d-47bf7b09ff84
	  Boot ID:                    eec79460-aaa8-401d-a650-94c3fb86c560
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kt8v4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-175414-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-47nts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-175414-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-175414-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-dcnmc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-175414-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-175414-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m                     kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-175414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-175414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-175414-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-175414-m02 status is now: NodeNotReady
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node ha-175414-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node ha-175414-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node ha-175414-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-175414-m02 event: Registered Node ha-175414-m02 in Controller
	
	
	Name:               ha-175414-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-175414-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=ha-175414
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_24_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:24:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-175414-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 23:35:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 23:34:42 +0000   Thu, 15 Aug 2024 23:35:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 23:34:42 +0000   Thu, 15 Aug 2024 23:35:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 23:34:42 +0000   Thu, 15 Aug 2024 23:35:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 23:34:42 +0000   Thu, 15 Aug 2024 23:35:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    ha-175414-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4da843156b4c43e0a4311c72833aae78
	  System UUID:                4da84315-6b4c-43e0-a431-1c72833aae78
	  Boot ID:                    774e1017-4917-4afa-9c43-9b106cb79caa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xzgkp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-6bf4q              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-jm5fj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 13m)      kubelet          Node ha-175414-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 13m)      kubelet          Node ha-175414-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 13m)      kubelet          Node ha-175414-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-175414-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m25s                  node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   NodeNotReady             3m45s                  node-controller  Node ha-175414-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m18s                  node-controller  Node ha-175414-m04 event: Registered Node ha-175414-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m47s (x2 over 2m47s)  kubelet          Node ha-175414-m04 has been rebooted, boot id: 774e1017-4917-4afa-9c43-9b106cb79caa
	  Normal   NodeHasSufficientMemory  2m47s (x3 over 2m47s)  kubelet          Node ha-175414-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x3 over 2m47s)  kubelet          Node ha-175414-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x3 over 2m47s)  kubelet          Node ha-175414-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m47s                  kubelet          Node ha-175414-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m47s                  kubelet          Node ha-175414-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-175414-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.056390] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050948] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.198639] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.119702] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.271672] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.126980] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.023155] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.059629] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.252555] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.087359] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.483452] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.149794] kauditd_printk_skb: 38 callbacks suppressed
	[Aug15 23:22] kauditd_printk_skb: 26 callbacks suppressed
	[Aug15 23:32] systemd-fstab-generator[3510]: Ignoring "noauto" option for root device
	[  +0.163710] systemd-fstab-generator[3538]: Ignoring "noauto" option for root device
	[  +0.277025] systemd-fstab-generator[3633]: Ignoring "noauto" option for root device
	[  +0.145791] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.323149] systemd-fstab-generator[3684]: Ignoring "noauto" option for root device
	[ +10.307759] systemd-fstab-generator[3809]: Ignoring "noauto" option for root device
	[  +0.086388] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.037089] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.778546] kauditd_printk_skb: 73 callbacks suppressed
	[ +15.260805] kauditd_printk_skb: 5 callbacks suppressed
	[Aug15 23:33] kauditd_printk_skb: 5 callbacks suppressed
	[ +18.736514] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [34369a9e60b2df64a4003619669a656300878d57bab81b79d2a4102ebc560082] <==
	{"level":"info","ts":"2024-08-15T23:34:03.058338Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:03.058383Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:03.071389Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ce564ad586a3115","to":"a244d22cbced21a4","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T23:34:03.071455Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:03.077655Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ce564ad586a3115","to":"a244d22cbced21a4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T23:34:03.077748Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:03.151329Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:55.818715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 switched to configuration voters=(929259593797349653 2841589207998042218)"}
	{"level":"info","ts":"2024-08-15T23:34:55.825349Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"429166af17098d53","local-member-id":"ce564ad586a3115","removed-remote-peer-id":"a244d22cbced21a4","removed-remote-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2024-08-15T23:34:55.825507Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a244d22cbced21a4"}
	{"level":"warn","ts":"2024-08-15T23:34:55.826202Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:55.826341Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a244d22cbced21a4"}
	{"level":"warn","ts":"2024-08-15T23:34:55.840746Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:55.840804Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:55.840851Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"warn","ts":"2024-08-15T23:34:55.841048Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4","error":"context canceled"}
	{"level":"warn","ts":"2024-08-15T23:34:55.841092Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"a244d22cbced21a4","error":"failed to read a244d22cbced21a4 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-15T23:34:55.841120Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"warn","ts":"2024-08-15T23:34:55.841326Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4","error":"context canceled"}
	{"level":"info","ts":"2024-08-15T23:34:55.841370Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:55.841383Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:55.841395Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ce564ad586a3115","removed-remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:34:55.841446Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"ce564ad586a3115","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"a244d22cbced21a4"}
	{"level":"warn","ts":"2024-08-15T23:34:55.855730Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ce564ad586a3115","remote-peer-id-stream-handler":"ce564ad586a3115","remote-peer-id-from":"a244d22cbced21a4"}
	{"level":"warn","ts":"2024-08-15T23:34:55.857566Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.100:50952","server-name":"","error":"read tcp 192.168.39.67:2380->192.168.39.100:50952: read: connection reset by peer"}
	
	
	==> etcd [aaba7057e0920ac1a8bf329a11c256119620b7169c45d1cc63ccacd6216b6391] <==
	2024/08/15 23:30:35 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/15 23:30:35 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T23:30:36.038879Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:30:36.039388Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T23:30:36.040667Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ce564ad586a3115","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-15T23:30:36.040807Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.040841Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.040864Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.040921Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.040987Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.041037Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.041048Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"276f5a544c4e906a"}
	{"level":"info","ts":"2024-08-15T23:30:36.041053Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041066Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041087Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041171Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041197Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041224Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ce564ad586a3115","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.041307Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a244d22cbced21a4"}
	{"level":"info","ts":"2024-08-15T23:30:36.044204Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"warn","ts":"2024-08-15T23:30:36.044366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.367147984s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-15T23:30:36.044406Z","caller":"traceutil/trace.go:171","msg":"trace[713965431] range","detail":"{range_begin:; range_end:; }","duration":"2.367201695s","start":"2024-08-15T23:30:33.677197Z","end":"2024-08-15T23:30:36.044398Z","steps":["trace[713965431] 'agreement among raft nodes before linearized reading'  (duration: 2.3671438s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T23:30:36.044435Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-15T23:30:36.045176Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-15T23:30:36.045199Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-175414","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	
	
	==> kernel <==
	 23:37:29 up 16 min,  0 users,  load average: 0.18, 0.30, 0.25
	Linux ha-175414 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7ff4093cdbbdd1a9a025f814a037e59f7e005a64c5869f2393b7d58bb236279b] <==
	I0815 23:36:45.191953       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:36:55.189542       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:36:55.189621       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:36:55.189791       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:36:55.189817       1 main.go:299] handling current node
	I0815 23:36:55.189829       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:36:55.189833       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:37:05.194759       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:37:05.194909       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:37:05.195319       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:37:05.195385       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:37:05.195495       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:37:05.195529       1 main.go:299] handling current node
	I0815 23:37:15.194350       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:37:15.194414       1 main.go:299] handling current node
	I0815 23:37:15.194449       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:37:15.194455       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:37:15.194641       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:37:15.194672       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:37:25.187674       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:37:25.187796       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:37:25.187964       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:37:25.187989       1 main.go:299] handling current node
	I0815 23:37:25.188015       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:37:25.188042       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [dce83cbb2055723a26c5893b60f22e6bc43f5857116ffb0cc56240518a24889a] <==
	I0815 23:29:59.559131       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:30:09.558899       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:30:09.558951       1 main.go:299] handling current node
	I0815 23:30:09.558966       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:30:09.558971       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:30:09.559118       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:30:09.559139       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:30:09.559210       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:30:09.559215       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:30:19.560237       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:30:19.560350       1 main.go:299] handling current node
	I0815 23:30:19.560370       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:30:19.560375       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	I0815 23:30:19.560526       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:30:19.560550       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:30:19.560617       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:30:19.560636       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:30:29.559331       1 main.go:295] Handling node with IPs: map[192.168.39.100:{}]
	I0815 23:30:29.559501       1 main.go:322] Node ha-175414-m03 has CIDR [10.244.2.0/24] 
	I0815 23:30:29.559728       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0815 23:30:29.559757       1 main.go:322] Node ha-175414-m04 has CIDR [10.244.3.0/24] 
	I0815 23:30:29.559830       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0815 23:30:29.559849       1 main.go:299] handling current node
	I0815 23:30:29.559873       1 main.go:295] Handling node with IPs: map[192.168.39.19:{}]
	I0815 23:30:29.559902       1 main.go:322] Node ha-175414-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [82da16254ec56d2ae4f43047e7513f91a8579884203307b0e8704cbe21e5a0e0] <==
	I0815 23:33:10.716163       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0815 23:33:10.784770       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 23:33:10.801327       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 23:33:10.801365       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 23:33:10.801483       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 23:33:10.801520       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 23:33:10.803963       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 23:33:10.804139       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 23:33:10.804140       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 23:33:10.810352       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 23:33:10.810350       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:33:10.810404       1 policy_source.go:224] refreshing policies
	I0815 23:33:10.815867       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 23:33:10.815988       1 aggregator.go:171] initial CRD sync complete...
	I0815 23:33:10.816039       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 23:33:10.816070       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 23:33:10.816099       1 cache.go:39] Caches are synced for autoregister controller
	W0815 23:33:10.818656       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.19]
	I0815 23:33:10.819865       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 23:33:10.831149       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 23:33:10.835676       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 23:33:10.884798       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:33:11.707541       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 23:33:12.049902       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.19 192.168.39.67]
	W0815 23:35:12.058337       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19 192.168.39.67]
	
	
	==> kube-apiserver [e1edfb586686ef330cdd7ccca0ea6e9259fd1eb0b767e47936b5aa27df660b06] <==
	I0815 23:32:24.055831       1 options.go:228] external host was not specified, using 192.168.39.67
	I0815 23:32:24.074500       1 server.go:142] Version: v1.31.0
	I0815 23:32:24.074605       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:32:25.183105       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 23:32:25.200551       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 23:32:25.200591       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 23:32:25.200796       1 instance.go:232] Using reconciler: lease
	I0815 23:32:25.202141       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0815 23:32:45.182078       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 23:32:45.182077       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0815 23:32:45.202241       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [3db7adbcee13c464d51080772d578613f99930e5619855c96cfe3d656df0c230] <==
	I0815 23:35:44.179498       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:35:44.205418       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:35:44.274004       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.385405ms"
	I0815 23:35:44.275510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.817µs"
	I0815 23:35:44.617533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	I0815 23:35:49.317424       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-175414-m04"
	E0815 23:35:54.091365       1 gc_controller.go:151] "Failed to get node" err="node \"ha-175414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-175414-m03"
	E0815 23:35:54.091498       1 gc_controller.go:151] "Failed to get node" err="node \"ha-175414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-175414-m03"
	E0815 23:35:54.091524       1 gc_controller.go:151] "Failed to get node" err="node \"ha-175414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-175414-m03"
	E0815 23:35:54.091549       1 gc_controller.go:151] "Failed to get node" err="node \"ha-175414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-175414-m03"
	E0815 23:35:54.091572       1 gc_controller.go:151] "Failed to get node" err="node \"ha-175414-m03\" not found" logger="pod-garbage-collector-controller" node="ha-175414-m03"
	I0815 23:35:54.103849       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-175414-m03"
	I0815 23:35:54.136870       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-175414-m03"
	I0815 23:35:54.137045       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-175414-m03"
	I0815 23:35:54.168240       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-175414-m03"
	I0815 23:35:54.168444       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-175414-m03"
	I0815 23:35:54.208972       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-175414-m03"
	I0815 23:35:54.209232       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fp2gc"
	I0815 23:35:54.242622       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fp2gc"
	I0815 23:35:54.242852       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-175414-m03"
	I0815 23:35:54.279524       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-175414-m03"
	I0815 23:35:54.279660       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-175414-m03"
	I0815 23:35:54.325220       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-175414-m03"
	I0815 23:35:54.325354       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-qtps7"
	I0815 23:35:54.380229       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-qtps7"
	
	
	==> kube-controller-manager [f8c3019e323c665a3d031120e58a806f271c738f75a4af5af7f7628e262110f9] <==
	I0815 23:32:24.764503       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:32:25.559011       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 23:32:25.559103       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:32:25.561036       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:32:25.561202       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:32:25.561797       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:32:25.561732       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0815 23:32:46.208590       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.67:8443/healthz\": dial tcp 192.168.39.67:8443: connect: connection refused"
	
	
	==> kube-proxy [602292b2cbfa562e5c0a7565041f75f2b7e9266b7a721e4a9e042c40385ffcfd] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:32:28.426908       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 23:32:31.499383       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 23:32:34.570810       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 23:32:40.715090       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 23:32:49.932365       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-175414\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0815 23:33:07.740814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E0815 23:33:07.740995       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:33:07.786870       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:33:07.786978       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:33:07.787024       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:33:07.790460       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:33:07.790870       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:33:07.790924       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:33:07.792801       1 config.go:197] "Starting service config controller"
	I0815 23:33:07.792874       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:33:07.792914       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:33:07.792942       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:33:07.793619       1 config.go:326] "Starting node config controller"
	I0815 23:33:07.793695       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:33:07.895395       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:33:07.895505       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:33:07.895592       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [70eb25dbc5face8015006cafec68e934a4668ffff5a239ab75e396eeeed22137] <==
	E0815 23:29:24.106913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:24.107313       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:24.107383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:27.178725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:27.178849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:30.252428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:30.252655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:30.251409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:30.253305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:39.467585       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:39.467841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:42.540650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:42.540855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:45.612535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:45.612809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:57.902359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:57.902437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:29:57.903369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:29:57.903527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:30:04.043786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:30:04.043888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:30:28.619962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:30:28.620139       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1847\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 23:30:28.620301       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 23:30:28.620360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-175414&resourceVersion=1879\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [55966e74357231172fa1cf8eca532b615d6b7b6508d4171efb6e6215c78635b1] <==
	W0815 23:33:01.810031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.67:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:01.810231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.67:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:02.322323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.67:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:02.322368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.67:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:02.424034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.67:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:02.424097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.67:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:02.610496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.67:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:02.610563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.67:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:03.117431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.67:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:03.117505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.67:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:03.371890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.67:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:03.372010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.67:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:03.450601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.67:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:03.450722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.67:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:06.059969       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.67:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:06.060088       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.67:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:07.286368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.67:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:07.286534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.67:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	W0815 23:33:07.600069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.67:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	E0815 23:33:07.600193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.67:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8443: connect: connection refused" logger="UnhandledError"
	I0815 23:33:19.518338       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:34:52.533667       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xzgkp\": pod busybox-7dff88458-xzgkp is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xzgkp" node="ha-175414-m04"
	E0815 23:34:52.533796       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b3ddb0be-f1b1-4aba-bdc5-5a549828f19b(default/busybox-7dff88458-xzgkp) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xzgkp"
	E0815 23:34:52.533839       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xzgkp\": pod busybox-7dff88458-xzgkp is already assigned to node \"ha-175414-m04\"" pod="default/busybox-7dff88458-xzgkp"
	I0815 23:34:52.533858       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xzgkp" node="ha-175414-m04"
	
	
	==> kube-scheduler [af5abf6569d1fdf303cf0a1c8c069b2dbbe833064ca92a59e911f018a8e50755] <==
	E0815 23:24:31.009629       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m6wl5\": pod kindnet-m6wl5 is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m6wl5" node="ha-175414-m04"
	E0815 23:24:31.009730       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod efa64311-983a-46d2-88b4-306fc316f564(kube-system/kindnet-m6wl5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-m6wl5"
	E0815 23:24:31.009767       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m6wl5\": pod kindnet-m6wl5 is already assigned to node \"ha-175414-m04\"" pod="kube-system/kindnet-m6wl5"
	I0815 23:24:31.009797       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m6wl5" node="ha-175414-m04"
	E0815 23:24:31.089615       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w68mv\": pod kube-proxy-w68mv is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w68mv" node="ha-175414-m04"
	E0815 23:24:31.093322       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8dece2a7-e846-45c9-81a2-a5766b3e2a59(kube-system/kube-proxy-w68mv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w68mv"
	E0815 23:24:31.093536       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w68mv\": pod kube-proxy-w68mv is already assigned to node \"ha-175414-m04\"" pod="kube-system/kube-proxy-w68mv"
	I0815 23:24:31.093743       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w68mv" node="ha-175414-m04"
	E0815 23:24:31.092964       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-442dg\": pod kindnet-442dg is already assigned to node \"ha-175414-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-442dg" node="ha-175414-m04"
	E0815 23:24:31.099497       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a7abeee9-7619-4535-9654-3a395026f469(kube-system/kindnet-442dg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-442dg"
	E0815 23:24:31.099565       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-442dg\": pod kindnet-442dg is already assigned to node \"ha-175414-m04\"" pod="kube-system/kindnet-442dg"
	I0815 23:24:31.099706       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-442dg" node="ha-175414-m04"
	E0815 23:30:27.195115       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0815 23:30:27.360447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0815 23:30:28.865167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0815 23:30:30.762656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0815 23:30:32.113696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0815 23:30:32.185637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0815 23:30:33.097130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0815 23:30:33.493938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0815 23:30:33.873939       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0815 23:30:34.643051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0815 23:30:35.429671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	I0815 23:30:35.738821       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0815 23:30:35.738936       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 15 23:36:11 ha-175414 kubelet[1322]: E0815 23:36:11.134463    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764971132722765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:36:20 ha-175414 kubelet[1322]: E0815 23:36:20.858574    1322 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:36:20 ha-175414 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:36:20 ha-175414 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:36:20 ha-175414 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:36:20 ha-175414 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:36:21 ha-175414 kubelet[1322]: E0815 23:36:21.136892    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764981136087493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:36:21 ha-175414 kubelet[1322]: E0815 23:36:21.136986    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764981136087493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:36:31 ha-175414 kubelet[1322]: E0815 23:36:31.138996    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764991138194994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:36:31 ha-175414 kubelet[1322]: E0815 23:36:31.139229    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723764991138194994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:36:41 ha-175414 kubelet[1322]: E0815 23:36:41.142556    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765001142156677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:36:41 ha-175414 kubelet[1322]: E0815 23:36:41.142613    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765001142156677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:36:51 ha-175414 kubelet[1322]: E0815 23:36:51.144984    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765011144424717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:36:51 ha-175414 kubelet[1322]: E0815 23:36:51.145033    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765011144424717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:37:01 ha-175414 kubelet[1322]: E0815 23:37:01.148822    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765021146946847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:37:01 ha-175414 kubelet[1322]: E0815 23:37:01.149236    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765021146946847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:37:11 ha-175414 kubelet[1322]: E0815 23:37:11.154695    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765031152203406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:37:11 ha-175414 kubelet[1322]: E0815 23:37:11.155194    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765031152203406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:37:20 ha-175414 kubelet[1322]: E0815 23:37:20.859692    1322 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:37:20 ha-175414 kubelet[1322]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:37:20 ha-175414 kubelet[1322]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:37:20 ha-175414 kubelet[1322]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:37:20 ha-175414 kubelet[1322]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:37:21 ha-175414 kubelet[1322]: E0815 23:37:21.160478    1322 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765041159633207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:37:21 ha-175414 kubelet[1322]: E0815 23:37:21.160524    1322 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723765041159633207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 23:37:28.356525   39349 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19452-12919/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
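The "bufio.Scanner: token too long" error in the stderr block above is the generic failure mode of bufio.Scanner when a single line in the scanned file (here lastStart.txt) exceeds the scanner's maximum token size, which defaults to 64 KiB. The following is a minimal standalone Go sketch of that failure mode and the usual workaround of enlarging the buffer before scanning; it is an illustrative example, not minikube's actual logs.go code, and the file path is hypothetical:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path; the report's real file lives under .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// bufio.Scanner rejects any line longer than its buffer (64 KiB by default)
		// with "bufio.Scanner: token too long"; raising the cap avoids that.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process each log line
		}
		if err := sc.Err(); err != nil {
			fmt.Println("scan error:", err)
		}
	}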
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-175414 -n ha-175414
helpers_test.go:261: (dbg) Run:  kubectl --context ha-175414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.88s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (324.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-145108
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-145108
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-145108: exit status 82 (2m1.83909114s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-145108-m03"  ...
	* Stopping node "multinode-145108-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-145108" : exit status 82
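Exit status 82 is how the GUEST_STOP_TIMEOUT above surfaces to the test harness. As a rough illustration of how a non-zero exit code from the minikube binary can be detected from Go, here is a hypothetical standalone sketch (not the actual helpers in multinode_test.go):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Hypothetical invocation mirroring the failing step above.
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-145108")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A stop that times out (GUEST_STOP_TIMEOUT) ends up here, e.g. with code 82.
			fmt.Printf("stop failed with exit code %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Println("stop succeeded")
	}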
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-145108 --wait=true -v=8 --alsologtostderr
E0815 23:57:51.160737   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:57:56.865259   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:59:53.801364   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-145108 --wait=true -v=8 --alsologtostderr: (3m20.252813655s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-145108
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-145108 -n multinode-145108
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-145108 logs -n 25: (1.497751846s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m02:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1410064125/001/cp-test_multinode-145108-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m02:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108:/home/docker/cp-test_multinode-145108-m02_multinode-145108.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n multinode-145108 sudo cat                                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /home/docker/cp-test_multinode-145108-m02_multinode-145108.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m02:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03:/home/docker/cp-test_multinode-145108-m02_multinode-145108-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n multinode-145108-m03 sudo cat                                   | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /home/docker/cp-test_multinode-145108-m02_multinode-145108-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp testdata/cp-test.txt                                                | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m03:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1410064125/001/cp-test_multinode-145108-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m03:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108:/home/docker/cp-test_multinode-145108-m03_multinode-145108.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n multinode-145108 sudo cat                                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /home/docker/cp-test_multinode-145108-m03_multinode-145108.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m03:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m02:/home/docker/cp-test_multinode-145108-m03_multinode-145108-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n multinode-145108-m02 sudo cat                                   | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /home/docker/cp-test_multinode-145108-m03_multinode-145108-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-145108 node stop m03                                                          | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	| node    | multinode-145108 node start                                                             | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-145108                                                                | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:55 UTC |                     |
	| stop    | -p multinode-145108                                                                     | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:55 UTC |                     |
	| start   | -p multinode-145108                                                                     | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:57 UTC | 16 Aug 24 00:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-145108                                                                | multinode-145108 | jenkins | v1.33.1 | 16 Aug 24 00:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:57:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:57:02.380271   49141 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:57:02.380514   49141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:57:02.380521   49141 out.go:358] Setting ErrFile to fd 2...
	I0815 23:57:02.380526   49141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:57:02.380690   49141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:57:02.381224   49141 out.go:352] Setting JSON to false
	I0815 23:57:02.382154   49141 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5922,"bootTime":1723760300,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:57:02.382216   49141 start.go:139] virtualization: kvm guest
	I0815 23:57:02.385163   49141 out.go:177] * [multinode-145108] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:57:02.386664   49141 notify.go:220] Checking for updates...
	I0815 23:57:02.386667   49141 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:57:02.388072   49141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:57:02.389330   49141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:57:02.390590   49141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:57:02.391918   49141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:57:02.393345   49141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:57:02.395277   49141 config.go:182] Loaded profile config "multinode-145108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:57:02.395390   49141 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:57:02.395953   49141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:57:02.395998   49141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:57:02.411135   49141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0815 23:57:02.411528   49141 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:57:02.412150   49141 main.go:141] libmachine: Using API Version  1
	I0815 23:57:02.412171   49141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:57:02.412574   49141 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:57:02.412858   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:57:02.450542   49141 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 23:57:02.451891   49141 start.go:297] selected driver: kvm2
	I0815 23:57:02.451915   49141 start.go:901] validating driver "kvm2" against &{Name:multinode-145108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:57:02.452056   49141 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:57:02.452379   49141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:57:02.452448   49141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:57:02.467790   49141 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:57:02.468458   49141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:57:02.468489   49141 cni.go:84] Creating CNI manager for ""
	I0815 23:57:02.468496   49141 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 23:57:02.468554   49141 start.go:340] cluster config:
	{Name:multinode-145108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:57:02.468668   49141 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:57:02.470507   49141 out.go:177] * Starting "multinode-145108" primary control-plane node in "multinode-145108" cluster
	I0815 23:57:02.471752   49141 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:57:02.471791   49141 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:57:02.471800   49141 cache.go:56] Caching tarball of preloaded images
	I0815 23:57:02.471890   49141 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:57:02.471904   49141 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:57:02.472043   49141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/config.json ...
	I0815 23:57:02.472253   49141 start.go:360] acquireMachinesLock for multinode-145108: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:57:02.472300   49141 start.go:364] duration metric: took 28.897µs to acquireMachinesLock for "multinode-145108"
	I0815 23:57:02.472315   49141 start.go:96] Skipping create...Using existing machine configuration
	I0815 23:57:02.472324   49141 fix.go:54] fixHost starting: 
	I0815 23:57:02.472636   49141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:57:02.472673   49141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:57:02.487042   49141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38973
	I0815 23:57:02.487444   49141 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:57:02.487885   49141 main.go:141] libmachine: Using API Version  1
	I0815 23:57:02.487903   49141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:57:02.488213   49141 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:57:02.488394   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:57:02.488525   49141 main.go:141] libmachine: (multinode-145108) Calling .GetState
	I0815 23:57:02.490017   49141 fix.go:112] recreateIfNeeded on multinode-145108: state=Running err=<nil>
	W0815 23:57:02.490051   49141 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 23:57:02.492254   49141 out.go:177] * Updating the running kvm2 "multinode-145108" VM ...
	I0815 23:57:02.493481   49141 machine.go:93] provisionDockerMachine start ...
	I0815 23:57:02.493499   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:57:02.493695   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:02.496544   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.497024   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.497043   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.497217   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:02.497404   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.497580   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.497703   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:02.497856   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:57:02.498056   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:57:02.498068   49141 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 23:57:02.603695   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-145108
	
	I0815 23:57:02.603727   49141 main.go:141] libmachine: (multinode-145108) Calling .GetMachineName
	I0815 23:57:02.603948   49141 buildroot.go:166] provisioning hostname "multinode-145108"
	I0815 23:57:02.603974   49141 main.go:141] libmachine: (multinode-145108) Calling .GetMachineName
	I0815 23:57:02.604202   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:02.606591   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.606916   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.606945   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.607129   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:02.607315   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.607469   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.607605   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:02.607763   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:57:02.607976   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:57:02.607992   49141 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-145108 && echo "multinode-145108" | sudo tee /etc/hostname
	I0815 23:57:02.725278   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-145108
	
	I0815 23:57:02.725304   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:02.728177   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.728552   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.728586   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.728797   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:02.728997   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.729258   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.729375   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:02.729536   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:57:02.729735   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:57:02.729753   49141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-145108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-145108/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-145108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:57:02.835741   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 23:57:02.835775   49141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:57:02.835806   49141 buildroot.go:174] setting up certificates
	I0815 23:57:02.835818   49141 provision.go:84] configureAuth start
	I0815 23:57:02.835827   49141 main.go:141] libmachine: (multinode-145108) Calling .GetMachineName
	I0815 23:57:02.836099   49141 main.go:141] libmachine: (multinode-145108) Calling .GetIP
	I0815 23:57:02.838788   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.839132   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.839158   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.839318   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:02.841862   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.842341   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.842361   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.842558   49141 provision.go:143] copyHostCerts
	I0815 23:57:02.842584   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:57:02.842624   49141 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:57:02.842638   49141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:57:02.842705   49141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:57:02.842809   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:57:02.842828   49141 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:57:02.842833   49141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:57:02.842857   49141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:57:02.842915   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:57:02.842957   49141 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:57:02.842963   49141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:57:02.842984   49141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:57:02.843080   49141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.multinode-145108 san=[127.0.0.1 192.168.39.117 localhost minikube multinode-145108]
	I0815 23:57:03.023336   49141 provision.go:177] copyRemoteCerts
	I0815 23:57:03.023394   49141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:57:03.023420   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:03.026344   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:03.026661   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:03.026680   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:03.026887   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:03.027056   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:03.027235   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:03.027374   49141 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108/id_rsa Username:docker}
	I0815 23:57:03.111445   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:57:03.111513   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:57:03.139651   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:57:03.139714   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0815 23:57:03.168877   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:57:03.168948   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 23:57:03.194569   49141 provision.go:87] duration metric: took 358.737626ms to configureAuth
	I0815 23:57:03.194599   49141 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:57:03.194842   49141 config.go:182] Loaded profile config "multinode-145108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:57:03.194924   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:03.197637   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:03.198051   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:03.198080   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:03.198333   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:03.198536   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:03.198718   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:03.198839   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:03.198989   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:57:03.199206   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:57:03.199222   49141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:58:33.974079   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:58:33.974126   49141 machine.go:96] duration metric: took 1m31.480631427s to provisionDockerMachine
	I0815 23:58:33.974147   49141 start.go:293] postStartSetup for "multinode-145108" (driver="kvm2")
	I0815 23:58:33.974167   49141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:58:33.974195   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:33.974536   49141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:58:33.974586   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:58:33.977782   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:33.978267   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:33.978297   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:33.978445   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:58:33.978675   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:33.978835   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:58:33.978967   49141 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108/id_rsa Username:docker}
	I0815 23:58:34.061780   49141 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:58:34.066190   49141 command_runner.go:130] > NAME=Buildroot
	I0815 23:58:34.066207   49141 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0815 23:58:34.066212   49141 command_runner.go:130] > ID=buildroot
	I0815 23:58:34.066223   49141 command_runner.go:130] > VERSION_ID=2023.02.9
	I0815 23:58:34.066228   49141 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0815 23:58:34.066260   49141 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:58:34.066279   49141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:58:34.066350   49141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:58:34.066417   49141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:58:34.066428   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:58:34.066509   49141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:58:34.076221   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:58:34.100977   49141 start.go:296] duration metric: took 126.813763ms for postStartSetup
	I0815 23:58:34.101028   49141 fix.go:56] duration metric: took 1m31.628707548s for fixHost
	I0815 23:58:34.101054   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:58:34.103594   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.103981   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:34.104002   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.104218   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:58:34.104424   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:34.104579   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:34.104707   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:58:34.104868   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:58:34.105029   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:58:34.105039   49141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:58:34.207057   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723766314.184559552
	
	I0815 23:58:34.207086   49141 fix.go:216] guest clock: 1723766314.184559552
	I0815 23:58:34.207106   49141 fix.go:229] Guest: 2024-08-15 23:58:34.184559552 +0000 UTC Remote: 2024-08-15 23:58:34.101036221 +0000 UTC m=+91.753799094 (delta=83.523331ms)
	I0815 23:58:34.207142   49141 fix.go:200] guest clock delta is within tolerance: 83.523331ms
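The clock check runs date +%s.%N on the guest and compares the parsed timestamp against the host time; here the delta is 83.5ms. A rough sketch of that parse-and-compare step, with an assumed one-second tolerance (the real threshold is not shown in this log):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the output of `date +%s.%N`, e.g. "1723766314.184559552".
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723766314.184559552") // value taken from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	const tolerance = time.Second // assumed tolerance; the log only shows 83.5ms being accepted
	fmt.Printf("guest=%s delta=%s within=%v\n",
		guest.UTC(), delta, math.Abs(float64(delta)) <= float64(tolerance))
}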
	I0815 23:58:34.207152   49141 start.go:83] releasing machines lock for "multinode-145108", held for 1m31.734841259s
	I0815 23:58:34.207181   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:34.207419   49141 main.go:141] libmachine: (multinode-145108) Calling .GetIP
	I0815 23:58:34.210175   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.210576   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:34.210606   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.210765   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:34.211262   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:34.211449   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:34.211556   49141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:58:34.211597   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:58:34.211663   49141 ssh_runner.go:195] Run: cat /version.json
	I0815 23:58:34.211686   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:58:34.214002   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.214314   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.214366   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:34.214390   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.214523   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:58:34.214688   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:34.214783   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:34.214820   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.214875   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:58:34.214963   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:58:34.215017   49141 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108/id_rsa Username:docker}
	I0815 23:58:34.215096   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:34.215224   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:58:34.215365   49141 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108/id_rsa Username:docker}
	I0815 23:58:34.291012   49141 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0815 23:58:34.307997   49141 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0815 23:58:34.308831   49141 ssh_runner.go:195] Run: systemctl --version
	I0815 23:58:34.314815   49141 command_runner.go:130] > systemd 252 (252)
	I0815 23:58:34.314868   49141 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0815 23:58:34.314973   49141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:58:34.477295   49141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 23:58:34.483852   49141 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0815 23:58:34.483886   49141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:58:34.483936   49141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:58:34.493475   49141 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 23:58:34.493499   49141 start.go:495] detecting cgroup driver to use...
	I0815 23:58:34.493551   49141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:58:34.509970   49141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:58:34.524971   49141 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:58:34.525040   49141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:58:34.539462   49141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:58:34.554484   49141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:58:34.702198   49141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:58:34.853294   49141 docker.go:233] disabling docker service ...
	I0815 23:58:34.853369   49141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:58:34.873514   49141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:58:34.888277   49141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:58:35.045106   49141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:58:35.209001   49141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:58:35.224107   49141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:58:35.243864   49141 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0815 23:58:35.244323   49141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:58:35.244383   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.255632   49141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:58:35.255703   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.267712   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.278874   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.291119   49141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:58:35.302671   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.313792   49141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.325718   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.337136   49141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:58:35.348605   49141 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0815 23:58:35.348686   49141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:58:35.365577   49141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:58:35.525361   49141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 23:58:37.401040   49141 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.875634112s)
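The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) before reloading systemd and restarting crio. As an approximation only, the touched fields would end up roughly like the fragment below; the TOML section headings and the write-to-file helper are assumptions, since minikube edits the existing file rather than rewriting it:

package main

import (
	"fmt"
	"os"
)

// Approximate end-state of the fields the log's sed commands touch in
// /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup driver,
// conmon cgroup, and the unprivileged-port sysctl.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// Writing to a scratch path here; minikube edits the real file in place.
	if err := os.WriteFile("02-crio.conf.sketch", []byte(crioDropIn), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(crioDropIn)
}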
	I0815 23:58:37.401086   49141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:58:37.401143   49141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:58:37.409642   49141 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0815 23:58:37.409671   49141 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0815 23:58:37.409681   49141 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0815 23:58:37.409689   49141 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 23:58:37.409696   49141 command_runner.go:130] > Access: 2024-08-15 23:58:37.272665430 +0000
	I0815 23:58:37.409706   49141 command_runner.go:130] > Modify: 2024-08-15 23:58:37.271665407 +0000
	I0815 23:58:37.409715   49141 command_runner.go:130] > Change: 2024-08-15 23:58:37.271665407 +0000
	I0815 23:58:37.409724   49141 command_runner.go:130] >  Birth: -
	I0815 23:58:37.409773   49141 start.go:563] Will wait 60s for crictl version
	I0815 23:58:37.409826   49141 ssh_runner.go:195] Run: which crictl
	I0815 23:58:37.413868   49141 command_runner.go:130] > /usr/bin/crictl
	I0815 23:58:37.413942   49141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:58:37.448793   49141 command_runner.go:130] > Version:  0.1.0
	I0815 23:58:37.448817   49141 command_runner.go:130] > RuntimeName:  cri-o
	I0815 23:58:37.448824   49141 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0815 23:58:37.448831   49141 command_runner.go:130] > RuntimeApiVersion:  v1
	I0815 23:58:37.448953   49141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:58:37.449045   49141 ssh_runner.go:195] Run: crio --version
	I0815 23:58:37.477379   49141 command_runner.go:130] > crio version 1.29.1
	I0815 23:58:37.477403   49141 command_runner.go:130] > Version:        1.29.1
	I0815 23:58:37.477411   49141 command_runner.go:130] > GitCommit:      unknown
	I0815 23:58:37.477417   49141 command_runner.go:130] > GitCommitDate:  unknown
	I0815 23:58:37.477423   49141 command_runner.go:130] > GitTreeState:   clean
	I0815 23:58:37.477431   49141 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0815 23:58:37.477437   49141 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 23:58:37.477443   49141 command_runner.go:130] > Compiler:       gc
	I0815 23:58:37.477460   49141 command_runner.go:130] > Platform:       linux/amd64
	I0815 23:58:37.477472   49141 command_runner.go:130] > Linkmode:       dynamic
	I0815 23:58:37.477478   49141 command_runner.go:130] > BuildTags:      
	I0815 23:58:37.477486   49141 command_runner.go:130] >   containers_image_ostree_stub
	I0815 23:58:37.477494   49141 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 23:58:37.477502   49141 command_runner.go:130] >   btrfs_noversion
	I0815 23:58:37.477507   49141 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 23:58:37.477514   49141 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 23:58:37.477518   49141 command_runner.go:130] >   seccomp
	I0815 23:58:37.477523   49141 command_runner.go:130] > LDFlags:          unknown
	I0815 23:58:37.477527   49141 command_runner.go:130] > SeccompEnabled:   true
	I0815 23:58:37.477532   49141 command_runner.go:130] > AppArmorEnabled:  false
	I0815 23:58:37.477597   49141 ssh_runner.go:195] Run: crio --version
	I0815 23:58:37.512400   49141 command_runner.go:130] > crio version 1.29.1
	I0815 23:58:37.512419   49141 command_runner.go:130] > Version:        1.29.1
	I0815 23:58:37.512425   49141 command_runner.go:130] > GitCommit:      unknown
	I0815 23:58:37.512429   49141 command_runner.go:130] > GitCommitDate:  unknown
	I0815 23:58:37.512433   49141 command_runner.go:130] > GitTreeState:   clean
	I0815 23:58:37.512441   49141 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0815 23:58:37.512448   49141 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 23:58:37.512452   49141 command_runner.go:130] > Compiler:       gc
	I0815 23:58:37.512456   49141 command_runner.go:130] > Platform:       linux/amd64
	I0815 23:58:37.512460   49141 command_runner.go:130] > Linkmode:       dynamic
	I0815 23:58:37.512466   49141 command_runner.go:130] > BuildTags:      
	I0815 23:58:37.512471   49141 command_runner.go:130] >   containers_image_ostree_stub
	I0815 23:58:37.512481   49141 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 23:58:37.512489   49141 command_runner.go:130] >   btrfs_noversion
	I0815 23:58:37.512495   49141 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 23:58:37.512502   49141 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 23:58:37.512509   49141 command_runner.go:130] >   seccomp
	I0815 23:58:37.512516   49141 command_runner.go:130] > LDFlags:          unknown
	I0815 23:58:37.512525   49141 command_runner.go:130] > SeccompEnabled:   true
	I0815 23:58:37.512532   49141 command_runner.go:130] > AppArmorEnabled:  false
	I0815 23:58:37.514401   49141 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:58:37.515885   49141 main.go:141] libmachine: (multinode-145108) Calling .GetIP
	I0815 23:58:37.518390   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:37.518699   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:37.518720   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:37.518899   49141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:58:37.523283   49141 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0815 23:58:37.523506   49141 kubeadm.go:883] updating cluster {Name:multinode-145108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 23:58:37.523654   49141 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:58:37.523700   49141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:58:37.566357   49141 command_runner.go:130] > {
	I0815 23:58:37.566380   49141 command_runner.go:130] >   "images": [
	I0815 23:58:37.566384   49141 command_runner.go:130] >     {
	I0815 23:58:37.566393   49141 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 23:58:37.566397   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566403   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 23:58:37.566407   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566411   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566423   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 23:58:37.566430   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 23:58:37.566434   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566439   49141 command_runner.go:130] >       "size": "87165492",
	I0815 23:58:37.566446   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566450   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566456   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566460   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566464   49141 command_runner.go:130] >     },
	I0815 23:58:37.566467   49141 command_runner.go:130] >     {
	I0815 23:58:37.566474   49141 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 23:58:37.566479   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566485   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 23:58:37.566491   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566495   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566502   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 23:58:37.566512   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 23:58:37.566516   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566520   49141 command_runner.go:130] >       "size": "87190579",
	I0815 23:58:37.566527   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566535   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566539   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566546   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566549   49141 command_runner.go:130] >     },
	I0815 23:58:37.566552   49141 command_runner.go:130] >     {
	I0815 23:58:37.566558   49141 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 23:58:37.566563   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566568   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 23:58:37.566573   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566578   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566585   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 23:58:37.566594   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 23:58:37.566598   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566605   49141 command_runner.go:130] >       "size": "1363676",
	I0815 23:58:37.566608   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566612   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566616   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566620   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566623   49141 command_runner.go:130] >     },
	I0815 23:58:37.566627   49141 command_runner.go:130] >     {
	I0815 23:58:37.566635   49141 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 23:58:37.566639   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566646   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 23:58:37.566649   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566653   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566660   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 23:58:37.566675   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 23:58:37.566680   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566684   49141 command_runner.go:130] >       "size": "31470524",
	I0815 23:58:37.566688   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566692   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566696   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566700   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566703   49141 command_runner.go:130] >     },
	I0815 23:58:37.566707   49141 command_runner.go:130] >     {
	I0815 23:58:37.566713   49141 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 23:58:37.566719   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566724   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 23:58:37.566730   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566734   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566741   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 23:58:37.566750   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 23:58:37.566754   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566758   49141 command_runner.go:130] >       "size": "61245718",
	I0815 23:58:37.566765   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566770   49141 command_runner.go:130] >       "username": "nonroot",
	I0815 23:58:37.566775   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566779   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566785   49141 command_runner.go:130] >     },
	I0815 23:58:37.566788   49141 command_runner.go:130] >     {
	I0815 23:58:37.566794   49141 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 23:58:37.566800   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566806   49141 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 23:58:37.566811   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566815   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566824   49141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 23:58:37.566831   49141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 23:58:37.566836   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566840   49141 command_runner.go:130] >       "size": "149009664",
	I0815 23:58:37.566845   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.566850   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.566855   49141 command_runner.go:130] >       },
	I0815 23:58:37.566859   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566865   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566869   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566872   49141 command_runner.go:130] >     },
	I0815 23:58:37.566875   49141 command_runner.go:130] >     {
	I0815 23:58:37.566881   49141 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 23:58:37.566887   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566892   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 23:58:37.566895   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566899   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566906   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 23:58:37.566916   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 23:58:37.566919   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566923   49141 command_runner.go:130] >       "size": "95233506",
	I0815 23:58:37.566926   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.566930   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.566934   49141 command_runner.go:130] >       },
	I0815 23:58:37.566937   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566941   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566945   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566949   49141 command_runner.go:130] >     },
	I0815 23:58:37.566952   49141 command_runner.go:130] >     {
	I0815 23:58:37.566958   49141 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 23:58:37.566964   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566969   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 23:58:37.566975   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566980   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566993   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 23:58:37.567002   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 23:58:37.567015   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567019   49141 command_runner.go:130] >       "size": "89437512",
	I0815 23:58:37.567022   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.567026   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.567029   49141 command_runner.go:130] >       },
	I0815 23:58:37.567033   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.567037   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.567040   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.567043   49141 command_runner.go:130] >     },
	I0815 23:58:37.567046   49141 command_runner.go:130] >     {
	I0815 23:58:37.567052   49141 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 23:58:37.567056   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.567060   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 23:58:37.567063   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567067   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.567074   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 23:58:37.567081   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 23:58:37.567085   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567088   49141 command_runner.go:130] >       "size": "92728217",
	I0815 23:58:37.567092   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.567095   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.567099   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.567103   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.567106   49141 command_runner.go:130] >     },
	I0815 23:58:37.567109   49141 command_runner.go:130] >     {
	I0815 23:58:37.567115   49141 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 23:58:37.567118   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.567123   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 23:58:37.567126   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567130   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.567136   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 23:58:37.567143   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 23:58:37.567146   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567151   49141 command_runner.go:130] >       "size": "68420936",
	I0815 23:58:37.567155   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.567158   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.567162   49141 command_runner.go:130] >       },
	I0815 23:58:37.567166   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.567172   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.567175   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.567180   49141 command_runner.go:130] >     },
	I0815 23:58:37.567183   49141 command_runner.go:130] >     {
	I0815 23:58:37.567189   49141 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 23:58:37.567195   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.567199   49141 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 23:58:37.567203   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567207   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.567213   49141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 23:58:37.567223   49141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 23:58:37.567226   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567230   49141 command_runner.go:130] >       "size": "742080",
	I0815 23:58:37.567236   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.567240   49141 command_runner.go:130] >         "value": "65535"
	I0815 23:58:37.567243   49141 command_runner.go:130] >       },
	I0815 23:58:37.567247   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.567251   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.567255   49141 command_runner.go:130] >       "pinned": true
	I0815 23:58:37.567258   49141 command_runner.go:130] >     }
	I0815 23:58:37.567261   49141 command_runner.go:130] >   ]
	I0815 23:58:37.567264   49141 command_runner.go:130] > }
	I0815 23:58:37.568116   49141 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:58:37.568134   49141 crio.go:433] Images already preloaded, skipping extraction
	I0815 23:58:37.568189   49141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:58:37.601906   49141 command_runner.go:130] > {
	I0815 23:58:37.601931   49141 command_runner.go:130] >   "images": [
	I0815 23:58:37.601937   49141 command_runner.go:130] >     {
	I0815 23:58:37.601945   49141 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 23:58:37.601955   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.601982   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 23:58:37.601990   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602002   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602016   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 23:58:37.602023   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 23:58:37.602027   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602034   49141 command_runner.go:130] >       "size": "87165492",
	I0815 23:58:37.602039   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602045   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602051   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602058   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602061   49141 command_runner.go:130] >     },
	I0815 23:58:37.602067   49141 command_runner.go:130] >     {
	I0815 23:58:37.602074   49141 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 23:58:37.602080   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602085   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 23:58:37.602091   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602095   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602102   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 23:58:37.602111   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 23:58:37.602116   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602120   49141 command_runner.go:130] >       "size": "87190579",
	I0815 23:58:37.602127   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602133   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602140   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602144   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602151   49141 command_runner.go:130] >     },
	I0815 23:58:37.602155   49141 command_runner.go:130] >     {
	I0815 23:58:37.602162   49141 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 23:58:37.602168   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602174   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 23:58:37.602179   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602184   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602193   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 23:58:37.602199   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 23:58:37.602205   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602209   49141 command_runner.go:130] >       "size": "1363676",
	I0815 23:58:37.602215   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602218   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602224   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602228   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602234   49141 command_runner.go:130] >     },
	I0815 23:58:37.602237   49141 command_runner.go:130] >     {
	I0815 23:58:37.602246   49141 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 23:58:37.602250   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602255   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 23:58:37.602261   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602266   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602275   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 23:58:37.602288   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 23:58:37.602293   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602297   49141 command_runner.go:130] >       "size": "31470524",
	I0815 23:58:37.602303   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602307   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602313   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602317   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602322   49141 command_runner.go:130] >     },
	I0815 23:58:37.602326   49141 command_runner.go:130] >     {
	I0815 23:58:37.602334   49141 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 23:58:37.602337   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602343   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 23:58:37.602347   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602352   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602361   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 23:58:37.602370   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 23:58:37.602376   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602381   49141 command_runner.go:130] >       "size": "61245718",
	I0815 23:58:37.602385   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602391   49141 command_runner.go:130] >       "username": "nonroot",
	I0815 23:58:37.602395   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602401   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602405   49141 command_runner.go:130] >     },
	I0815 23:58:37.602410   49141 command_runner.go:130] >     {
	I0815 23:58:37.602416   49141 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 23:58:37.602422   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602427   49141 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 23:58:37.602432   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602436   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602446   49141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 23:58:37.602452   49141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 23:58:37.602463   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602467   49141 command_runner.go:130] >       "size": "149009664",
	I0815 23:58:37.602470   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602474   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.602478   49141 command_runner.go:130] >       },
	I0815 23:58:37.602482   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602485   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602489   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602492   49141 command_runner.go:130] >     },
	I0815 23:58:37.602495   49141 command_runner.go:130] >     {
	I0815 23:58:37.602501   49141 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 23:58:37.602505   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602510   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 23:58:37.602513   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602518   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602524   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 23:58:37.602534   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 23:58:37.602538   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602548   49141 command_runner.go:130] >       "size": "95233506",
	I0815 23:58:37.602552   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602556   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.602560   49141 command_runner.go:130] >       },
	I0815 23:58:37.602564   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602567   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602571   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602575   49141 command_runner.go:130] >     },
	I0815 23:58:37.602578   49141 command_runner.go:130] >     {
	I0815 23:58:37.602585   49141 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 23:58:37.602589   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602595   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 23:58:37.602598   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602602   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602616   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 23:58:37.602626   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 23:58:37.602629   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602633   49141 command_runner.go:130] >       "size": "89437512",
	I0815 23:58:37.602637   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602641   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.602645   49141 command_runner.go:130] >       },
	I0815 23:58:37.602649   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602653   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602657   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602663   49141 command_runner.go:130] >     },
	I0815 23:58:37.602666   49141 command_runner.go:130] >     {
	I0815 23:58:37.602672   49141 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 23:58:37.602676   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602680   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 23:58:37.602686   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602690   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602696   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 23:58:37.602707   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 23:58:37.602713   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602716   49141 command_runner.go:130] >       "size": "92728217",
	I0815 23:58:37.602720   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602725   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602731   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602735   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602739   49141 command_runner.go:130] >     },
	I0815 23:58:37.602744   49141 command_runner.go:130] >     {
	I0815 23:58:37.602750   49141 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 23:58:37.602754   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602761   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 23:58:37.602764   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602768   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602775   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 23:58:37.602783   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 23:58:37.602787   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602791   49141 command_runner.go:130] >       "size": "68420936",
	I0815 23:58:37.602794   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602798   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.602804   49141 command_runner.go:130] >       },
	I0815 23:58:37.602808   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602812   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602815   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602819   49141 command_runner.go:130] >     },
	I0815 23:58:37.602822   49141 command_runner.go:130] >     {
	I0815 23:58:37.602828   49141 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 23:58:37.602833   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602838   49141 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 23:58:37.602841   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602845   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602851   49141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 23:58:37.602858   49141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 23:58:37.602862   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602866   49141 command_runner.go:130] >       "size": "742080",
	I0815 23:58:37.602870   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602874   49141 command_runner.go:130] >         "value": "65535"
	I0815 23:58:37.602878   49141 command_runner.go:130] >       },
	I0815 23:58:37.602882   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602885   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602890   49141 command_runner.go:130] >       "pinned": true
	I0815 23:58:37.602893   49141 command_runner.go:130] >     }
	I0815 23:58:37.602896   49141 command_runner.go:130] >   ]
	I0815 23:58:37.602901   49141 command_runner.go:130] > }
	I0815 23:58:37.603463   49141 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:58:37.603476   49141 cache_images.go:84] Images are preloaded, skipping loading
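Both image checks above shell out to sudo crictl images --output json and decode the result; the JSON shape is visible verbatim in the log. A small sketch that parses that shape and prints the tags (the struct names are invented):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage matches the fields shown in the crictl output logged above.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type crictlImages struct {
	Images []crictlImage `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s  (%s bytes, pinned=%v)\n", tag, img.Size, img.Pinned)
		}
	}
}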
	I0815 23:58:37.603490   49141 kubeadm.go:934] updating node { 192.168.39.117 8443 v1.31.0 crio true true} ...
	I0815 23:58:37.603608   49141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-145108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 23:58:37.603670   49141 ssh_runner.go:195] Run: crio config
	I0815 23:58:37.644464   49141 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0815 23:58:37.644496   49141 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0815 23:58:37.644507   49141 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0815 23:58:37.644512   49141 command_runner.go:130] > #
	I0815 23:58:37.644522   49141 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0815 23:58:37.644539   49141 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0815 23:58:37.644552   49141 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0815 23:58:37.644564   49141 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0815 23:58:37.644573   49141 command_runner.go:130] > # reload'.
	I0815 23:58:37.644583   49141 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0815 23:58:37.644591   49141 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0815 23:58:37.644599   49141 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0815 23:58:37.644611   49141 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0815 23:58:37.644618   49141 command_runner.go:130] > [crio]
	I0815 23:58:37.644627   49141 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0815 23:58:37.644639   49141 command_runner.go:130] > # containers images, in this directory.
	I0815 23:58:37.644648   49141 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0815 23:58:37.644662   49141 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0815 23:58:37.644668   49141 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0815 23:58:37.644676   49141 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0815 23:58:37.644926   49141 command_runner.go:130] > # imagestore = ""
	I0815 23:58:37.644950   49141 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0815 23:58:37.644962   49141 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0815 23:58:37.645036   49141 command_runner.go:130] > storage_driver = "overlay"
	I0815 23:58:37.645052   49141 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0815 23:58:37.645080   49141 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0815 23:58:37.645089   49141 command_runner.go:130] > storage_option = [
	I0815 23:58:37.645218   49141 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0815 23:58:37.645247   49141 command_runner.go:130] > ]
	I0815 23:58:37.645259   49141 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0815 23:58:37.645271   49141 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0815 23:58:37.645546   49141 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0815 23:58:37.645562   49141 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0815 23:58:37.645572   49141 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0815 23:58:37.645583   49141 command_runner.go:130] > # always happen on a node reboot
	I0815 23:58:37.645928   49141 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0815 23:58:37.645950   49141 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0815 23:58:37.645959   49141 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0815 23:58:37.645967   49141 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0815 23:58:37.646110   49141 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0815 23:58:37.646126   49141 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0815 23:58:37.646138   49141 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0815 23:58:37.646519   49141 command_runner.go:130] > # internal_wipe = true
	I0815 23:58:37.646541   49141 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0815 23:58:37.646551   49141 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0815 23:58:37.646863   49141 command_runner.go:130] > # internal_repair = false
	I0815 23:58:37.646875   49141 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0815 23:58:37.646882   49141 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0815 23:58:37.646891   49141 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0815 23:58:37.647157   49141 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0815 23:58:37.647174   49141 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0815 23:58:37.647180   49141 command_runner.go:130] > [crio.api]
	I0815 23:58:37.647192   49141 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0815 23:58:37.647438   49141 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0815 23:58:37.647452   49141 command_runner.go:130] > # IP address on which the stream server will listen.
	I0815 23:58:37.647794   49141 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0815 23:58:37.647806   49141 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0815 23:58:37.647812   49141 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0815 23:58:37.648055   49141 command_runner.go:130] > # stream_port = "0"
	I0815 23:58:37.648065   49141 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0815 23:58:37.648338   49141 command_runner.go:130] > # stream_enable_tls = false
	I0815 23:58:37.648355   49141 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0815 23:58:37.648554   49141 command_runner.go:130] > # stream_idle_timeout = ""
	I0815 23:58:37.648564   49141 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0815 23:58:37.648570   49141 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0815 23:58:37.648574   49141 command_runner.go:130] > # minutes.
	I0815 23:58:37.648852   49141 command_runner.go:130] > # stream_tls_cert = ""
	I0815 23:58:37.648863   49141 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0815 23:58:37.648869   49141 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0815 23:58:37.649069   49141 command_runner.go:130] > # stream_tls_key = ""
	I0815 23:58:37.649079   49141 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0815 23:58:37.649085   49141 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0815 23:58:37.649107   49141 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0815 23:58:37.649308   49141 command_runner.go:130] > # stream_tls_ca = ""
	I0815 23:58:37.649319   49141 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 23:58:37.649480   49141 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0815 23:58:37.649497   49141 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 23:58:37.649662   49141 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0815 23:58:37.649672   49141 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0815 23:58:37.649678   49141 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0815 23:58:37.649682   49141 command_runner.go:130] > [crio.runtime]
	I0815 23:58:37.649691   49141 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0815 23:58:37.649703   49141 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0815 23:58:37.649714   49141 command_runner.go:130] > # "nofile=1024:2048"
	I0815 23:58:37.649731   49141 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0815 23:58:37.649852   49141 command_runner.go:130] > # default_ulimits = [
	I0815 23:58:37.649932   49141 command_runner.go:130] > # ]
	I0815 23:58:37.649951   49141 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0815 23:58:37.649957   49141 command_runner.go:130] > # no_pivot = false
	I0815 23:58:37.649966   49141 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0815 23:58:37.649975   49141 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0815 23:58:37.649988   49141 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0815 23:58:37.649997   49141 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0815 23:58:37.650007   49141 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0815 23:58:37.650018   49141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 23:58:37.650028   49141 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0815 23:58:37.650035   49141 command_runner.go:130] > # Cgroup setting for conmon
	I0815 23:58:37.650050   49141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0815 23:58:37.650060   49141 command_runner.go:130] > conmon_cgroup = "pod"
	I0815 23:58:37.650069   49141 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0815 23:58:37.650077   49141 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0815 23:58:37.650094   49141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 23:58:37.650103   49141 command_runner.go:130] > conmon_env = [
	I0815 23:58:37.650114   49141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 23:58:37.650123   49141 command_runner.go:130] > ]
	I0815 23:58:37.650132   49141 command_runner.go:130] > # Additional environment variables to set for all the
	I0815 23:58:37.650143   49141 command_runner.go:130] > # containers. These are overridden if set in the
	I0815 23:58:37.650153   49141 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0815 23:58:37.650161   49141 command_runner.go:130] > # default_env = [
	I0815 23:58:37.650166   49141 command_runner.go:130] > # ]
	I0815 23:58:37.650179   49141 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0815 23:58:37.650193   49141 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0815 23:58:37.650203   49141 command_runner.go:130] > # selinux = false
	I0815 23:58:37.650213   49141 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0815 23:58:37.650226   49141 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0815 23:58:37.650238   49141 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0815 23:58:37.650244   49141 command_runner.go:130] > # seccomp_profile = ""
	I0815 23:58:37.650256   49141 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0815 23:58:37.650268   49141 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0815 23:58:37.650281   49141 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0815 23:58:37.650294   49141 command_runner.go:130] > # which might increase security.
	I0815 23:58:37.650304   49141 command_runner.go:130] > # This option is currently deprecated,
	I0815 23:58:37.650314   49141 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0815 23:58:37.650325   49141 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0815 23:58:37.650336   49141 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0815 23:58:37.650349   49141 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0815 23:58:37.650361   49141 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0815 23:58:37.650375   49141 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0815 23:58:37.650387   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.650398   49141 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0815 23:58:37.650409   49141 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0815 23:58:37.650417   49141 command_runner.go:130] > # the cgroup blockio controller.
	I0815 23:58:37.650428   49141 command_runner.go:130] > # blockio_config_file = ""
	I0815 23:58:37.650438   49141 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0815 23:58:37.650448   49141 command_runner.go:130] > # blockio parameters.
	I0815 23:58:37.650455   49141 command_runner.go:130] > # blockio_reload = false
	I0815 23:58:37.650468   49141 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0815 23:58:37.650478   49141 command_runner.go:130] > # irqbalance daemon.
	I0815 23:58:37.650487   49141 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0815 23:58:37.650498   49141 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0815 23:58:37.650513   49141 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0815 23:58:37.650523   49141 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0815 23:58:37.650536   49141 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0815 23:58:37.650548   49141 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0815 23:58:37.650559   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.650568   49141 command_runner.go:130] > # rdt_config_file = ""
	I0815 23:58:37.650580   49141 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0815 23:58:37.650588   49141 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0815 23:58:37.650632   49141 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0815 23:58:37.650643   49141 command_runner.go:130] > # separate_pull_cgroup = ""
	I0815 23:58:37.650653   49141 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0815 23:58:37.650665   49141 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0815 23:58:37.650674   49141 command_runner.go:130] > # will be added.
	I0815 23:58:37.650680   49141 command_runner.go:130] > # default_capabilities = [
	I0815 23:58:37.650689   49141 command_runner.go:130] > # 	"CHOWN",
	I0815 23:58:37.650695   49141 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0815 23:58:37.650706   49141 command_runner.go:130] > # 	"FSETID",
	I0815 23:58:37.650714   49141 command_runner.go:130] > # 	"FOWNER",
	I0815 23:58:37.650719   49141 command_runner.go:130] > # 	"SETGID",
	I0815 23:58:37.650728   49141 command_runner.go:130] > # 	"SETUID",
	I0815 23:58:37.650734   49141 command_runner.go:130] > # 	"SETPCAP",
	I0815 23:58:37.650744   49141 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0815 23:58:37.650750   49141 command_runner.go:130] > # 	"KILL",
	I0815 23:58:37.650758   49141 command_runner.go:130] > # ]
	I0815 23:58:37.650775   49141 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0815 23:58:37.650789   49141 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0815 23:58:37.650802   49141 command_runner.go:130] > # add_inheritable_capabilities = false
	I0815 23:58:37.650814   49141 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0815 23:58:37.650826   49141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 23:58:37.650835   49141 command_runner.go:130] > default_sysctls = [
	I0815 23:58:37.650844   49141 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0815 23:58:37.650853   49141 command_runner.go:130] > ]
	I0815 23:58:37.650860   49141 command_runner.go:130] > # List of devices on the host that a
	I0815 23:58:37.650872   49141 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0815 23:58:37.650882   49141 command_runner.go:130] > # allowed_devices = [
	I0815 23:58:37.650888   49141 command_runner.go:130] > # 	"/dev/fuse",
	I0815 23:58:37.650896   49141 command_runner.go:130] > # ]
	I0815 23:58:37.650904   49141 command_runner.go:130] > # List of additional devices. specified as
	I0815 23:58:37.650918   49141 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0815 23:58:37.650929   49141 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0815 23:58:37.650938   49141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 23:58:37.650949   49141 command_runner.go:130] > # additional_devices = [
	I0815 23:58:37.650954   49141 command_runner.go:130] > # ]
	I0815 23:58:37.650965   49141 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0815 23:58:37.650975   49141 command_runner.go:130] > # cdi_spec_dirs = [
	I0815 23:58:37.650981   49141 command_runner.go:130] > # 	"/etc/cdi",
	I0815 23:58:37.650990   49141 command_runner.go:130] > # 	"/var/run/cdi",
	I0815 23:58:37.650995   49141 command_runner.go:130] > # ]
	I0815 23:58:37.651007   49141 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0815 23:58:37.651020   49141 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0815 23:58:37.651030   49141 command_runner.go:130] > # Defaults to false.
	I0815 23:58:37.651037   49141 command_runner.go:130] > # device_ownership_from_security_context = false
	I0815 23:58:37.651053   49141 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0815 23:58:37.651067   49141 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0815 23:58:37.651073   49141 command_runner.go:130] > # hooks_dir = [
	I0815 23:58:37.651081   49141 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0815 23:58:37.651089   49141 command_runner.go:130] > # ]
	I0815 23:58:37.651098   49141 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0815 23:58:37.651112   49141 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0815 23:58:37.651123   49141 command_runner.go:130] > # its default mounts from the following two files:
	I0815 23:58:37.651128   49141 command_runner.go:130] > #
	I0815 23:58:37.651143   49141 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0815 23:58:37.651160   49141 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0815 23:58:37.651172   49141 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0815 23:58:37.651180   49141 command_runner.go:130] > #
	I0815 23:58:37.651190   49141 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0815 23:58:37.651202   49141 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0815 23:58:37.651214   49141 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0815 23:58:37.651222   49141 command_runner.go:130] > #      only add mounts it finds in this file.
	I0815 23:58:37.651230   49141 command_runner.go:130] > #
	I0815 23:58:37.651237   49141 command_runner.go:130] > # default_mounts_file = ""
	I0815 23:58:37.651248   49141 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0815 23:58:37.651259   49141 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0815 23:58:37.651270   49141 command_runner.go:130] > pids_limit = 1024
	I0815 23:58:37.651280   49141 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0815 23:58:37.651289   49141 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0815 23:58:37.651304   49141 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0815 23:58:37.651315   49141 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0815 23:58:37.651328   49141 command_runner.go:130] > # log_size_max = -1
	I0815 23:58:37.651342   49141 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0815 23:58:37.651352   49141 command_runner.go:130] > # log_to_journald = false
	I0815 23:58:37.651361   49141 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0815 23:58:37.651368   49141 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0815 23:58:37.651380   49141 command_runner.go:130] > # Path to directory for container attach sockets.
	I0815 23:58:37.651391   49141 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0815 23:58:37.651403   49141 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0815 23:58:37.651413   49141 command_runner.go:130] > # bind_mount_prefix = ""
	I0815 23:58:37.651423   49141 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0815 23:58:37.651449   49141 command_runner.go:130] > # read_only = false
	I0815 23:58:37.651462   49141 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0815 23:58:37.651475   49141 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0815 23:58:37.651485   49141 command_runner.go:130] > # live configuration reload.
	I0815 23:58:37.651492   49141 command_runner.go:130] > # log_level = "info"
	I0815 23:58:37.651503   49141 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0815 23:58:37.651512   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.651519   49141 command_runner.go:130] > # log_filter = ""
	I0815 23:58:37.651529   49141 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0815 23:58:37.651543   49141 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0815 23:58:37.651548   49141 command_runner.go:130] > # separated by comma.
	I0815 23:58:37.651561   49141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 23:58:37.651570   49141 command_runner.go:130] > # uid_mappings = ""
	I0815 23:58:37.651580   49141 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0815 23:58:37.651592   49141 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0815 23:58:37.651602   49141 command_runner.go:130] > # separated by comma.
	I0815 23:58:37.651613   49141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 23:58:37.651623   49141 command_runner.go:130] > # gid_mappings = ""
	I0815 23:58:37.651651   49141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0815 23:58:37.651665   49141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 23:58:37.651675   49141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 23:58:37.651689   49141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 23:58:37.651710   49141 command_runner.go:130] > # minimum_mappable_uid = -1
	I0815 23:58:37.651724   49141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0815 23:58:37.651733   49141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 23:58:37.651745   49141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 23:58:37.651757   49141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 23:58:37.651769   49141 command_runner.go:130] > # minimum_mappable_gid = -1
	I0815 23:58:37.651780   49141 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0815 23:58:37.651792   49141 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0815 23:58:37.651805   49141 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0815 23:58:37.651815   49141 command_runner.go:130] > # ctr_stop_timeout = 30
	I0815 23:58:37.651825   49141 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0815 23:58:37.651837   49141 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0815 23:58:37.651847   49141 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0815 23:58:37.651855   49141 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0815 23:58:37.651872   49141 command_runner.go:130] > drop_infra_ctr = false
	I0815 23:58:37.651885   49141 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0815 23:58:37.651896   49141 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0815 23:58:37.651910   49141 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0815 23:58:37.651920   49141 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0815 23:58:37.651930   49141 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0815 23:58:37.651943   49141 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0815 23:58:37.651952   49141 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0815 23:58:37.651965   49141 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0815 23:58:37.651972   49141 command_runner.go:130] > # shared_cpuset = ""
	I0815 23:58:37.651981   49141 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0815 23:58:37.651992   49141 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0815 23:58:37.652002   49141 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0815 23:58:37.652013   49141 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0815 23:58:37.652023   49141 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0815 23:58:37.652032   49141 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0815 23:58:37.652044   49141 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0815 23:58:37.652054   49141 command_runner.go:130] > # enable_criu_support = false
	I0815 23:58:37.652062   49141 command_runner.go:130] > # Enable/disable the generation of the container,
	I0815 23:58:37.652075   49141 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0815 23:58:37.652084   49141 command_runner.go:130] > # enable_pod_events = false
	I0815 23:58:37.652095   49141 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0815 23:58:37.652108   49141 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0815 23:58:37.652119   49141 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0815 23:58:37.652129   49141 command_runner.go:130] > # default_runtime = "runc"
	I0815 23:58:37.652140   49141 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0815 23:58:37.652154   49141 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0815 23:58:37.652171   49141 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0815 23:58:37.652182   49141 command_runner.go:130] > # creation as a file is not desired either.
	I0815 23:58:37.652195   49141 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0815 23:58:37.652205   49141 command_runner.go:130] > # the hostname is being managed dynamically.
	I0815 23:58:37.652216   49141 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0815 23:58:37.652222   49141 command_runner.go:130] > # ]
	I0815 23:58:37.652235   49141 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0815 23:58:37.652247   49141 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0815 23:58:37.652259   49141 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0815 23:58:37.652271   49141 command_runner.go:130] > # Each entry in the table should follow the format:
	I0815 23:58:37.652279   49141 command_runner.go:130] > #
	I0815 23:58:37.652287   49141 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0815 23:58:37.652297   49141 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0815 23:58:37.652318   49141 command_runner.go:130] > # runtime_type = "oci"
	I0815 23:58:37.652328   49141 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0815 23:58:37.652336   49141 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0815 23:58:37.652346   49141 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0815 23:58:37.652353   49141 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0815 23:58:37.652361   49141 command_runner.go:130] > # monitor_env = []
	I0815 23:58:37.652368   49141 command_runner.go:130] > # privileged_without_host_devices = false
	I0815 23:58:37.652377   49141 command_runner.go:130] > # allowed_annotations = []
	I0815 23:58:37.652391   49141 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0815 23:58:37.652400   49141 command_runner.go:130] > # Where:
	I0815 23:58:37.652408   49141 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0815 23:58:37.652421   49141 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0815 23:58:37.652433   49141 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0815 23:58:37.652446   49141 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0815 23:58:37.652455   49141 command_runner.go:130] > #   in $PATH.
	I0815 23:58:37.652464   49141 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0815 23:58:37.652476   49141 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0815 23:58:37.652489   49141 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0815 23:58:37.652497   49141 command_runner.go:130] > #   state.
	I0815 23:58:37.652507   49141 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0815 23:58:37.652520   49141 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0815 23:58:37.652530   49141 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0815 23:58:37.652541   49141 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0815 23:58:37.652552   49141 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0815 23:58:37.652565   49141 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0815 23:58:37.652575   49141 command_runner.go:130] > #   The currently recognized values are:
	I0815 23:58:37.652587   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0815 23:58:37.652602   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0815 23:58:37.652613   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0815 23:58:37.652625   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0815 23:58:37.652639   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0815 23:58:37.652651   49141 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0815 23:58:37.652665   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0815 23:58:37.652678   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0815 23:58:37.652690   49141 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0815 23:58:37.652703   49141 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0815 23:58:37.652713   49141 command_runner.go:130] > #   deprecated option "conmon".
	I0815 23:58:37.652726   49141 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0815 23:58:37.652737   49141 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0815 23:58:37.652749   49141 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0815 23:58:37.652760   49141 command_runner.go:130] > #   should be moved to the container's cgroup
	I0815 23:58:37.652777   49141 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0815 23:58:37.652788   49141 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0815 23:58:37.652800   49141 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0815 23:58:37.652812   49141 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0815 23:58:37.652820   49141 command_runner.go:130] > #
	I0815 23:58:37.652827   49141 command_runner.go:130] > # Using the seccomp notifier feature:
	I0815 23:58:37.652835   49141 command_runner.go:130] > #
	I0815 23:58:37.652844   49141 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0815 23:58:37.652856   49141 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0815 23:58:37.652864   49141 command_runner.go:130] > #
	I0815 23:58:37.652873   49141 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0815 23:58:37.652887   49141 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0815 23:58:37.652895   49141 command_runner.go:130] > #
	I0815 23:58:37.652905   49141 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0815 23:58:37.652913   49141 command_runner.go:130] > # feature.
	I0815 23:58:37.652918   49141 command_runner.go:130] > #
	I0815 23:58:37.652930   49141 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0815 23:58:37.652941   49141 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0815 23:58:37.652950   49141 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0815 23:58:37.652958   49141 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0815 23:58:37.652967   49141 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0815 23:58:37.652972   49141 command_runner.go:130] > #
	I0815 23:58:37.652979   49141 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0815 23:58:37.652987   49141 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0815 23:58:37.652993   49141 command_runner.go:130] > #
	I0815 23:58:37.652999   49141 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0815 23:58:37.653007   49141 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0815 23:58:37.653012   49141 command_runner.go:130] > #
	I0815 23:58:37.653019   49141 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0815 23:58:37.653028   49141 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0815 23:58:37.653034   49141 command_runner.go:130] > # limitation.
	I0815 23:58:37.653040   49141 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0815 23:58:37.653046   49141 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0815 23:58:37.653050   49141 command_runner.go:130] > runtime_type = "oci"
	I0815 23:58:37.653056   49141 command_runner.go:130] > runtime_root = "/run/runc"
	I0815 23:58:37.653060   49141 command_runner.go:130] > runtime_config_path = ""
	I0815 23:58:37.653073   49141 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0815 23:58:37.653079   49141 command_runner.go:130] > monitor_cgroup = "pod"
	I0815 23:58:37.653083   49141 command_runner.go:130] > monitor_exec_cgroup = ""
	I0815 23:58:37.653089   49141 command_runner.go:130] > monitor_env = [
	I0815 23:58:37.653095   49141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 23:58:37.653101   49141 command_runner.go:130] > ]
	I0815 23:58:37.653105   49141 command_runner.go:130] > privileged_without_host_devices = false
	I0815 23:58:37.653113   49141 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0815 23:58:37.653122   49141 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0815 23:58:37.653128   49141 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0815 23:58:37.653137   49141 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0815 23:58:37.653146   49141 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0815 23:58:37.653154   49141 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0815 23:58:37.653164   49141 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0815 23:58:37.653173   49141 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0815 23:58:37.653181   49141 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0815 23:58:37.653191   49141 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0815 23:58:37.653194   49141 command_runner.go:130] > # Example:
	I0815 23:58:37.653199   49141 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0815 23:58:37.653203   49141 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0815 23:58:37.653207   49141 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0815 23:58:37.653212   49141 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0815 23:58:37.653215   49141 command_runner.go:130] > # cpuset = 0
	I0815 23:58:37.653219   49141 command_runner.go:130] > # cpushares = "0-1"
	I0815 23:58:37.653222   49141 command_runner.go:130] > # Where:
	I0815 23:58:37.653226   49141 command_runner.go:130] > # The workload name is workload-type.
	I0815 23:58:37.653232   49141 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0815 23:58:37.653238   49141 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0815 23:58:37.653243   49141 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0815 23:58:37.653251   49141 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0815 23:58:37.653256   49141 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0815 23:58:37.653261   49141 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0815 23:58:37.653267   49141 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0815 23:58:37.653271   49141 command_runner.go:130] > # Default value is set to true
	I0815 23:58:37.653275   49141 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0815 23:58:37.653280   49141 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0815 23:58:37.653284   49141 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0815 23:58:37.653288   49141 command_runner.go:130] > # Default value is set to 'false'
	I0815 23:58:37.653292   49141 command_runner.go:130] > # disable_hostport_mapping = false
	I0815 23:58:37.653298   49141 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0815 23:58:37.653301   49141 command_runner.go:130] > #
	I0815 23:58:37.653306   49141 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0815 23:58:37.653312   49141 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0815 23:58:37.653317   49141 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0815 23:58:37.653332   49141 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0815 23:58:37.653341   49141 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0815 23:58:37.653344   49141 command_runner.go:130] > [crio.image]
	I0815 23:58:37.653350   49141 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0815 23:58:37.653354   49141 command_runner.go:130] > # default_transport = "docker://"
	I0815 23:58:37.653360   49141 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0815 23:58:37.653366   49141 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0815 23:58:37.653369   49141 command_runner.go:130] > # global_auth_file = ""
	I0815 23:58:37.653376   49141 command_runner.go:130] > # The image used to instantiate infra containers.
	I0815 23:58:37.653384   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.653388   49141 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0815 23:58:37.653396   49141 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0815 23:58:37.653402   49141 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0815 23:58:37.653408   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.653413   49141 command_runner.go:130] > # pause_image_auth_file = ""
	I0815 23:58:37.653419   49141 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0815 23:58:37.653427   49141 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0815 23:58:37.653435   49141 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0815 23:58:37.653443   49141 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0815 23:58:37.653448   49141 command_runner.go:130] > # pause_command = "/pause"
	I0815 23:58:37.653456   49141 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0815 23:58:37.653463   49141 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0815 23:58:37.653471   49141 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0815 23:58:37.653477   49141 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0815 23:58:37.653485   49141 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0815 23:58:37.653491   49141 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0815 23:58:37.653497   49141 command_runner.go:130] > # pinned_images = [
	I0815 23:58:37.653500   49141 command_runner.go:130] > # ]
	I0815 23:58:37.653508   49141 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0815 23:58:37.653514   49141 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0815 23:58:37.653522   49141 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0815 23:58:37.653530   49141 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0815 23:58:37.653535   49141 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0815 23:58:37.653541   49141 command_runner.go:130] > # signature_policy = ""
	I0815 23:58:37.653546   49141 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0815 23:58:37.653555   49141 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0815 23:58:37.653562   49141 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0815 23:58:37.653568   49141 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0815 23:58:37.653576   49141 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0815 23:58:37.653581   49141 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0815 23:58:37.653589   49141 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0815 23:58:37.653597   49141 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0815 23:58:37.653601   49141 command_runner.go:130] > # changing them here.
	I0815 23:58:37.653607   49141 command_runner.go:130] > # insecure_registries = [
	I0815 23:58:37.653610   49141 command_runner.go:130] > # ]
	I0815 23:58:37.653618   49141 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0815 23:58:37.653625   49141 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0815 23:58:37.653629   49141 command_runner.go:130] > # image_volumes = "mkdir"
	I0815 23:58:37.653636   49141 command_runner.go:130] > # Temporary directory to use for storing big files
	I0815 23:58:37.653640   49141 command_runner.go:130] > # big_files_temporary_dir = ""
	I0815 23:58:37.653648   49141 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0815 23:58:37.653656   49141 command_runner.go:130] > # CNI plugins.
	I0815 23:58:37.653662   49141 command_runner.go:130] > [crio.network]
	I0815 23:58:37.653672   49141 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0815 23:58:37.653683   49141 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0815 23:58:37.653693   49141 command_runner.go:130] > # cni_default_network = ""
	I0815 23:58:37.653705   49141 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0815 23:58:37.653715   49141 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0815 23:58:37.653726   49141 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0815 23:58:37.653736   49141 command_runner.go:130] > # plugin_dirs = [
	I0815 23:58:37.653746   49141 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0815 23:58:37.653753   49141 command_runner.go:130] > # ]
	I0815 23:58:37.653758   49141 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0815 23:58:37.653768   49141 command_runner.go:130] > [crio.metrics]
	I0815 23:58:37.653775   49141 command_runner.go:130] > # Globally enable or disable metrics support.
	I0815 23:58:37.653779   49141 command_runner.go:130] > enable_metrics = true
	I0815 23:58:37.653786   49141 command_runner.go:130] > # Specify enabled metrics collectors.
	I0815 23:58:37.653791   49141 command_runner.go:130] > # Per default all metrics are enabled.
	I0815 23:58:37.653799   49141 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0815 23:58:37.653808   49141 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0815 23:58:37.653815   49141 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0815 23:58:37.653819   49141 command_runner.go:130] > # metrics_collectors = [
	I0815 23:58:37.653825   49141 command_runner.go:130] > # 	"operations",
	I0815 23:58:37.653830   49141 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0815 23:58:37.653836   49141 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0815 23:58:37.653851   49141 command_runner.go:130] > # 	"operations_errors",
	I0815 23:58:37.653862   49141 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0815 23:58:37.653869   49141 command_runner.go:130] > # 	"image_pulls_by_name",
	I0815 23:58:37.653876   49141 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0815 23:58:37.653885   49141 command_runner.go:130] > # 	"image_pulls_failures",
	I0815 23:58:37.653890   49141 command_runner.go:130] > # 	"image_pulls_successes",
	I0815 23:58:37.653897   49141 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0815 23:58:37.653901   49141 command_runner.go:130] > # 	"image_layer_reuse",
	I0815 23:58:37.653908   49141 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0815 23:58:37.653913   49141 command_runner.go:130] > # 	"containers_oom_total",
	I0815 23:58:37.653919   49141 command_runner.go:130] > # 	"containers_oom",
	I0815 23:58:37.653923   49141 command_runner.go:130] > # 	"processes_defunct",
	I0815 23:58:37.653929   49141 command_runner.go:130] > # 	"operations_total",
	I0815 23:58:37.653933   49141 command_runner.go:130] > # 	"operations_latency_seconds",
	I0815 23:58:37.653940   49141 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0815 23:58:37.653945   49141 command_runner.go:130] > # 	"operations_errors_total",
	I0815 23:58:37.653952   49141 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0815 23:58:37.653957   49141 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0815 23:58:37.653966   49141 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0815 23:58:37.653976   49141 command_runner.go:130] > # 	"image_pulls_success_total",
	I0815 23:58:37.653986   49141 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0815 23:58:37.653996   49141 command_runner.go:130] > # 	"containers_oom_count_total",
	I0815 23:58:37.654006   49141 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0815 23:58:37.654012   49141 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0815 23:58:37.654020   49141 command_runner.go:130] > # ]
	I0815 23:58:37.654031   49141 command_runner.go:130] > # The port on which the metrics server will listen.
	I0815 23:58:37.654039   49141 command_runner.go:130] > # metrics_port = 9090
	I0815 23:58:37.654049   49141 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0815 23:58:37.654055   49141 command_runner.go:130] > # metrics_socket = ""
	I0815 23:58:37.654066   49141 command_runner.go:130] > # The certificate for the secure metrics server.
	I0815 23:58:37.654079   49141 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0815 23:58:37.654090   49141 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0815 23:58:37.654100   49141 command_runner.go:130] > # certificate on any modification event.
	I0815 23:58:37.654110   49141 command_runner.go:130] > # metrics_cert = ""
	I0815 23:58:37.654120   49141 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0815 23:58:37.654131   49141 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0815 23:58:37.654139   49141 command_runner.go:130] > # metrics_key = ""
	I0815 23:58:37.654148   49141 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0815 23:58:37.654157   49141 command_runner.go:130] > [crio.tracing]
	I0815 23:58:37.654166   49141 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0815 23:58:37.654175   49141 command_runner.go:130] > # enable_tracing = false
	I0815 23:58:37.654181   49141 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0815 23:58:37.654188   49141 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0815 23:58:37.654194   49141 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0815 23:58:37.654201   49141 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0815 23:58:37.654205   49141 command_runner.go:130] > # CRI-O NRI configuration.
	I0815 23:58:37.654210   49141 command_runner.go:130] > [crio.nri]
	I0815 23:58:37.654214   49141 command_runner.go:130] > # Globally enable or disable NRI.
	I0815 23:58:37.654219   49141 command_runner.go:130] > # enable_nri = false
	I0815 23:58:37.654224   49141 command_runner.go:130] > # NRI socket to listen on.
	I0815 23:58:37.654230   49141 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0815 23:58:37.654234   49141 command_runner.go:130] > # NRI plugin directory to use.
	I0815 23:58:37.654240   49141 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0815 23:58:37.654246   49141 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0815 23:58:37.654251   49141 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0815 23:58:37.654259   49141 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0815 23:58:37.654263   49141 command_runner.go:130] > # nri_disable_connections = false
	I0815 23:58:37.654272   49141 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0815 23:58:37.654282   49141 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0815 23:58:37.654294   49141 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0815 23:58:37.654303   49141 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0815 23:58:37.654315   49141 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0815 23:58:37.654323   49141 command_runner.go:130] > [crio.stats]
	I0815 23:58:37.654334   49141 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0815 23:58:37.654345   49141 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0815 23:58:37.654354   49141 command_runner.go:130] > # stats_collection_period = 0
	I0815 23:58:37.654396   49141 command_runner.go:130] ! time="2024-08-15 23:58:37.613055484Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0815 23:58:37.654413   49141 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
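The dump above is CRI-O echoing its effective TOML configuration: metrics are explicitly enabled (enable_metrics = true) while the port, socket, certificate, tracing, NRI and stats settings are all left at their commented defaults. As a minimal sketch of what that exposes, the Go program below scrapes the Prometheus endpoint implied by the commented metrics_port = 9090 default; the 127.0.0.1 address, the /metrics path and the 5-second timeout are assumptions of this illustration, not values taken from the log.

// metrics_probe.go: fetch CRI-O's Prometheus metrics and print a few samples.
// Assumptions (not from the log): the endpoint is reachable on the node at
// 127.0.0.1:9090 (the commented default metrics_port above) under /metrics,
// and enable_metrics = true as shown in the dumped config.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatalf("fetching CRI-O metrics: %v", err)
	}
	defer resp.Body.Close()

	// Print only the crio_operations* samples named in the collector list above.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "crio_operations") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("reading response: %v", err)
	}
}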
	I0815 23:58:37.654547   49141 cni.go:84] Creating CNI manager for ""
	I0815 23:58:37.654559   49141 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 23:58:37.654569   49141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 23:58:37.654595   49141 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.117 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-145108 NodeName:multinode-145108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 23:58:37.654739   49141 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-145108"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
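The rendered config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube then copies to /var/tmp/minikube/kubeadm.yaml.new. As a minimal sketch for sanity-checking such a stream before handing it to kubeadm, the Go program below decodes each document with gopkg.in/yaml.v3 and prints its apiVersion and kind; the local path kubeadm.yaml is a hypothetical copy of the generated file, not a path from this log.

// kubeadm_config_kinds.go: list the apiVersion/kind of every document in a
// rendered kubeadm config like the one above. Illustrative sketch only; the
// path "kubeadm.yaml" is a hypothetical local copy of the generated file.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

type docHeader struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var h docHeader
		if err := dec.Decode(&h); err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("decoding document: %v", err)
		}
		fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
	}
}

On the config shown above this would print the four kinds in order, which is a quick way to catch a truncated or mis-concatenated document stream.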
	I0815 23:58:37.654817   49141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:58:37.665097   49141 command_runner.go:130] > kubeadm
	I0815 23:58:37.665119   49141 command_runner.go:130] > kubectl
	I0815 23:58:37.665124   49141 command_runner.go:130] > kubelet
	I0815 23:58:37.665145   49141 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 23:58:37.665201   49141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 23:58:37.674993   49141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0815 23:58:37.692734   49141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:58:37.709859   49141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0815 23:58:37.726803   49141 ssh_runner.go:195] Run: grep 192.168.39.117	control-plane.minikube.internal$ /etc/hosts
	I0815 23:58:37.731013   49141 command_runner.go:130] > 192.168.39.117	control-plane.minikube.internal
	I0815 23:58:37.731111   49141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:58:37.865714   49141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:58:37.883417   49141 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108 for IP: 192.168.39.117
	I0815 23:58:37.883447   49141 certs.go:194] generating shared ca certs ...
	I0815 23:58:37.883470   49141 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:58:37.883674   49141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:58:37.883733   49141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:58:37.883752   49141 certs.go:256] generating profile certs ...
	I0815 23:58:37.883862   49141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/client.key
	I0815 23:58:37.883923   49141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.key.cfce1887
	I0815 23:58:37.883973   49141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.key
	I0815 23:58:37.883984   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:58:37.883996   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:58:37.884009   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:58:37.884019   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:58:37.884031   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:58:37.884044   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:58:37.884066   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:58:37.884078   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:58:37.884137   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:58:37.884163   49141 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:58:37.884175   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:58:37.884198   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:58:37.884220   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:58:37.884242   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:58:37.884282   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:58:37.884309   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:58:37.884324   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:37.884338   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:58:37.884945   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:58:37.909748   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:58:37.934154   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:58:37.958235   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:58:37.985223   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 23:58:38.010104   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 23:58:38.034503   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:58:38.059116   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 23:58:38.083402   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:58:38.107458   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:58:38.133204   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:58:38.157964   49141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 23:58:38.175522   49141 ssh_runner.go:195] Run: openssl version
	I0815 23:58:38.181800   49141 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0815 23:58:38.181874   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:58:38.192838   49141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:58:38.197430   49141 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:58:38.197586   49141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:58:38.197648   49141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:58:38.203608   49141 command_runner.go:130] > 51391683
	I0815 23:58:38.203712   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0815 23:58:38.213249   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:58:38.224383   49141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:58:38.228963   49141 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:58:38.229171   49141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:58:38.229225   49141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:58:38.234929   49141 command_runner.go:130] > 3ec20f2e
	I0815 23:58:38.235126   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 23:58:38.244980   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:58:38.256316   49141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:38.260799   49141 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:38.261004   49141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:38.261063   49141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:38.266988   49141 command_runner.go:130] > b5213941
	I0815 23:58:38.267048   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 23:58:38.276620   49141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:58:38.281371   49141 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:58:38.281396   49141 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0815 23:58:38.281404   49141 command_runner.go:130] > Device: 253,1	Inode: 6291478     Links: 1
	I0815 23:58:38.281415   49141 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 23:58:38.281439   49141 command_runner.go:130] > Access: 2024-08-15 23:51:58.469727512 +0000
	I0815 23:58:38.281449   49141 command_runner.go:130] > Modify: 2024-08-15 23:51:58.469727512 +0000
	I0815 23:58:38.281460   49141 command_runner.go:130] > Change: 2024-08-15 23:51:58.469727512 +0000
	I0815 23:58:38.281471   49141 command_runner.go:130] >  Birth: 2024-08-15 23:51:58.469727512 +0000
	I0815 23:58:38.281553   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 23:58:38.287324   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.287526   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 23:58:38.293324   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.293387   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 23:58:38.299799   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.299871   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 23:58:38.305358   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.305536   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 23:58:38.311149   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.311354   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 23:58:38.316709   49141 command_runner.go:130] > Certificate will not expire
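The six openssl runs above all use -checkend 86400, i.e. they ask whether each control-plane certificate will still be valid 24 hours from now; every one reports "Certificate will not expire", so nothing needs regenerating. A rough Go equivalent of that check is sketched below; the certificate path is a hypothetical command-line argument and only the first PEM block in the file is inspected, both assumptions of the illustration rather than details from the log.

// checkend.go: rough equivalent of `openssl x509 -noout -checkend 86400`.
// Sketch only; the cert path is a hypothetical argument, and only the first
// PEM block in the file is parsed.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <cert.pem>", os.Args[0])
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatalf("parsing certificate: %v", err)
	}
	// 86400 seconds = 24 hours, matching the -checkend value used above.
	if time.Until(cert.NotAfter) > 24*time.Hour {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
}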
	I0815 23:58:38.316929   49141 kubeadm.go:392] StartCluster: {Name:multinode-145108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:58:38.317033   49141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 23:58:38.317098   49141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 23:58:38.360824   49141 command_runner.go:130] > a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20
	I0815 23:58:38.360851   49141 command_runner.go:130] > 7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e
	I0815 23:58:38.360860   49141 command_runner.go:130] > e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36
	I0815 23:58:38.360870   49141 command_runner.go:130] > 801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8
	I0815 23:58:38.360877   49141 command_runner.go:130] > a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a
	I0815 23:58:38.360886   49141 command_runner.go:130] > e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1
	I0815 23:58:38.360894   49141 command_runner.go:130] > e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781
	I0815 23:58:38.360911   49141 command_runner.go:130] > 3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9
	I0815 23:58:38.360954   49141 cri.go:89] found id: "a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20"
	I0815 23:58:38.360964   49141 cri.go:89] found id: "7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e"
	I0815 23:58:38.360968   49141 cri.go:89] found id: "e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36"
	I0815 23:58:38.360972   49141 cri.go:89] found id: "801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8"
	I0815 23:58:38.360975   49141 cri.go:89] found id: "a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a"
	I0815 23:58:38.360978   49141 cri.go:89] found id: "e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1"
	I0815 23:58:38.360981   49141 cri.go:89] found id: "e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781"
	I0815 23:58:38.360984   49141 cri.go:89] found id: "3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9"
	I0815 23:58:38.360986   49141 cri.go:89] found id: ""
	I0815 23:58:38.361026   49141 ssh_runner.go:195] Run: sudo runc list -f json
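StartCluster begins by enumerating any existing kube-system containers: a label-filtered `crictl ps -a --quiet` returns the eight IDs listed above, which are then cross-checked with `runc list -f json`. The Go sketch below mirrors that first crictl call in simplified form; it assumes crictl is on PATH and that the process already has sufficient privilege (the log wraps the same command in `sudo -s eval`).

// list_kube_system.go: collect container IDs for the kube-system namespace,
// mirroring the crictl invocation above. Illustrative sketch; assumes crictl
// is on PATH and the process has enough privilege to talk to the CRI socket.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatalf("running crictl: %v", err)
	}
	ids := strings.Fields(string(out))
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Printf("%d kube-system containers\n", len(ids))
}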
	
	
	==> CRI-O <==
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.285474393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f35d6d38-329f-424c-9978-6974de1de528 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.287041886Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fe17ccc-16c9-4743-b250-01b128cd85d4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.287454512Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766423287433149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fe17ccc-16c9-4743-b250-01b128cd85d4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.287965535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf71d289-7a7b-4850-8170-85348ab67775 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.288019886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf71d289-7a7b-4850-8170-85348ab67775 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.288358773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f6dc7afc9283b35c7b80bfdb092e4ae3fe3d7e042fe4ed6c90e16ace9a20de,PodSandboxId:252868a22af5102c4c9fb9fb03664a9404ad51d5ee58cb9bb2986b542b59771d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723766359435754485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d,PodSandboxId:8494ba630f92f5b6b1bb3ca0ec201bc5c1492c1b91d14a632ca739a29091b03b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723766325914164330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e,PodSandboxId:63cf2848b334f2e49bdc9caaa3949d598a94486e14c08345c2c32943a2319c42,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723766325964131703,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5,PodSandboxId:3b1f67d8681e23ed687115ec0575b1ed9112ea9047ba962c1622d4b2b7c6b52c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723766325823196155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05caa33dcdec812c8640535c6d52db6be57c9df197dc03574f9d85c016cdbc53,PodSandboxId:a5caefa2102df20d2a03301f5a6dc4c4448ce0349ecc2a697a48bcb10806c3c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723766325779392896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880,PodSandboxId:2da5135fbede854225eac41edf259a51a90576d81511f69c9c9514652ef550dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723766320944894907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0,PodSandboxId:85f52cd98ab9c23733520c50b2008764ad3419bd70c3e4ee64be69e10028c7ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723766320928146354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a,PodSandboxId:47870a76c14a5190be9a138cff93f9a137d4fac3e291134c186a3c4272278819,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723766320866828838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565,PodSandboxId:7ff998226c39d956f5e5b9b27602ce04114d99dec2cc9cb3a93cd50dea784d34,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723766320812977597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a9b72cbbbd778125d5a22fbf4e7a0a190ca5277ee444fe2c9cdf8e2f232a2a,PodSandboxId:2f77bdc0a065f29f67f0c6b2f30783f2cb081d56a22c2064447954fe82ba24c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723765999938228931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20,PodSandboxId:cf2239096f991e94cecf74ca246360b59214637277272b57aaf1f720a14a5146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723765946451466934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e,PodSandboxId:9b40c16717dfc0f0801fea14f49cd52360c3aaff620982c76d1d508c9cbc4188,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723765946441200334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36,PodSandboxId:da85041659fd531e3c115fbc4f527f4169a4b6d64ba3b765dd21c679a13270a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723765934667376993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8,PodSandboxId:3d83d3da8eb472e33d62c09bfb3e1fc250e0be253b0845ff59dd418ae7e6301b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723765932331616310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1,PodSandboxId:9c338c9803f3349a087c2e9b6b1be71e0478f9321e47f24c2f64e5c859d58c22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723765922068555261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a,PodSandboxId:62496da5bd532f6b8ee12509ad1330af1be0f7e9d0b9849df57d9005cd292f47,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723765922096588202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781,PodSandboxId:44def40a9dae141695784db8e3794eb7838a530ac4ff28952d84a9315b5a87a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723765922003966650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9,PodSandboxId:2613471cfdea6cd86260f1301204b10795418199bbaac3e5f8b32b513b11c903,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723765921984188216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf71d289-7a7b-4850-8170-85348ab67775 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.302074248Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=fd228dcb-0622-4766-9c42-c12090aa5842 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.302432166Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:252868a22af5102c4c9fb9fb03664a9404ad51d5ee58cb9bb2986b542b59771d,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-h45mw,Uid:de33a362-6df1-4a49-9c9f-bfbdb3c8183c,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723766359279391549,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:58:45.138377145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63cf2848b334f2e49bdc9caaa3949d598a94486e14c08345c2c32943a2319c42,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-4hjxz,Uid:c2521d34-15fc-4304-a3ae-7d9e95df6342,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1723766325580094426,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:58:45.138378356Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3b1f67d8681e23ed687115ec0575b1ed9112ea9047ba962c1622d4b2b7c6b52c,Metadata:&PodSandboxMetadata{Name:kube-proxy-kcx86,Uid:ae10003b-b485-4db4-8649-bee882b1bbd0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723766325500778925,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-08-15T23:58:45.138373539Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8494ba630f92f5b6b1bb3ca0ec201bc5c1492c1b91d14a632ca739a29091b03b,Metadata:&PodSandboxMetadata{Name:kindnet-s5nls,Uid:4cf7ba89-dc92-4ead-a84b-56dca892ab9f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723766325497824857,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:58:45.138380288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5caefa2102df20d2a03301f5a6dc4c4448ce0349ecc2a697a48bcb10806c3c5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9cef8aec-1cd5-4251-aa88-a6dc5b398c12,Namespace:kube-system,Attempt:1,},State
:SANDBOX_READY,CreatedAt:1723766325493347422,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T23:58:45.138379400Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47870a76c14a5190be9a138cff93f9a137d4fac3e291134c186a3c4272278819,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-145108,Uid:d6abef8cf2f7b219d41ad3fd197a8d9b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723766320678596574,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d6abef8cf2f7b219d41ad3fd197a8d9b,kubernetes.io/config.seen: 2024-08-15T23:58:40.159834526Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2da5135fbede854225eac41edf259a51a90576d81511f69c9c9514652ef550dd,Metadata:&PodSandboxMetadata{Name:kube-controller-mana
ger-multinode-145108,Uid:f8bb1e0b7b05f4430922a4242347e8ea,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723766320676377039,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f8bb1e0b7b05f4430922a4242347e8ea,kubernetes.io/config.seen: 2024-08-15T23:58:40.159833221Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:85f52cd98ab9c23733520c50b2008764ad3419bd70c3e4ee64be69e10028c7ab,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-145108,Uid:88f5d3acc91f539d7d95f3f990c1c4bf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723766320660355251,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-1
45108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.117:8443,kubernetes.io/config.hash: 88f5d3acc91f539d7d95f3f990c1c4bf,kubernetes.io/config.seen: 2024-08-15T23:58:40.159829421Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7ff998226c39d956f5e5b9b27602ce04114d99dec2cc9cb3a93cd50dea784d34,Metadata:&PodSandboxMetadata{Name:etcd-multinode-145108,Uid:1c6a42104da1631cd79aee1b5360fe02,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723766320649315867,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.117:2379,kuberne
tes.io/config.hash: 1c6a42104da1631cd79aee1b5360fe02,kubernetes.io/config.seen: 2024-08-15T23:58:40.159835510Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2f77bdc0a065f29f67f0c6b2f30783f2cb081d56a22c2064447954fe82ba24c7,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-h45mw,Uid:de33a362-6df1-4a49-9c9f-bfbdb3c8183c,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723765998942943256,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:53:17.135234945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9b40c16717dfc0f0801fea14f49cd52360c3aaff620982c76d1d508c9cbc4188,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9cef8aec-1cd5-4251-aa88-a6dc5b398c12,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1723765946257124263,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T23:52:25.928531094Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf2239096f991e94cecf74ca246360b59214637277272b57aaf1f720a14a5146,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-4hjxz,Uid:c2521d34-15fc-4304-a3ae-7d9e95df6342,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723765946232415888,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:52:25.922059267Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d83d3da8eb472e33d62c09bfb3e1fc250e0be253b0845ff59dd418ae7e6301b,Metadata:&PodSandboxMetadata{Name:kube-proxy-kcx86,Uid:ae10003b-b485-4db4-8649-bee882b1bbd0,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723765932071976918,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:52:11.753571912Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da85041659fd531e3c115fbc4f527f4169a4b6d64ba3b765dd21c679a13270a5,Metadata:&PodSandboxMetadata{Name:kindnet-s5nls,Uid:4cf7ba89-dc92-4ead-a84b-56dca892ab9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723765932055965311,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T23:52:11.746517526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:62496da5bd532f6b8ee12509ad1330af1be0f7e9d0b9849df57d9005cd292f47,Metadata:&PodSandboxMetadata{Name:etcd-multinode-145108,Uid:1c6a42104da1631cd79aee1b5360fe02,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723765921830129735,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.117:2379,kubernetes.io/config.hash: 1c6a42104da1631cd79aee1b5360fe02,kubernetes.io/config.seen: 2024-08-15T23:52:01.346138860Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9c338c9803f3349a087c2e9b6b1be71e0478f9321e47f24c2f6
4e5c859d58c22,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-145108,Uid:d6abef8cf2f7b219d41ad3fd197a8d9b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723765921822517298,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d6abef8cf2f7b219d41ad3fd197a8d9b,kubernetes.io/config.seen: 2024-08-15T23:52:01.346144811Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2613471cfdea6cd86260f1301204b10795418199bbaac3e5f8b32b513b11c903,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-145108,Uid:88f5d3acc91f539d7d95f3f990c1c4bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723765921811080973,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.117:8443,kubernetes.io/config.hash: 88f5d3acc91f539d7d95f3f990c1c4bf,kubernetes.io/config.seen: 2024-08-15T23:52:01.346142930Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44def40a9dae141695784db8e3794eb7838a530ac4ff28952d84a9315b5a87a2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-145108,Uid:f8bb1e0b7b05f4430922a4242347e8ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723765921804082248,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,tier: control-plane,},Annotati
ons:map[string]string{kubernetes.io/config.hash: f8bb1e0b7b05f4430922a4242347e8ea,kubernetes.io/config.seen: 2024-08-15T23:52:01.346144003Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fd228dcb-0622-4766-9c42-c12090aa5842 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.304298039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16c6f262-acf4-4789-ae10-41004b8d739d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.304355738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16c6f262-acf4-4789-ae10-41004b8d739d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.304981810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f6dc7afc9283b35c7b80bfdb092e4ae3fe3d7e042fe4ed6c90e16ace9a20de,PodSandboxId:252868a22af5102c4c9fb9fb03664a9404ad51d5ee58cb9bb2986b542b59771d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723766359435754485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d,PodSandboxId:8494ba630f92f5b6b1bb3ca0ec201bc5c1492c1b91d14a632ca739a29091b03b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723766325914164330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e,PodSandboxId:63cf2848b334f2e49bdc9caaa3949d598a94486e14c08345c2c32943a2319c42,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723766325964131703,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5,PodSandboxId:3b1f67d8681e23ed687115ec0575b1ed9112ea9047ba962c1622d4b2b7c6b52c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723766325823196155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05caa33dcdec812c8640535c6d52db6be57c9df197dc03574f9d85c016cdbc53,PodSandboxId:a5caefa2102df20d2a03301f5a6dc4c4448ce0349ecc2a697a48bcb10806c3c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723766325779392896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880,PodSandboxId:2da5135fbede854225eac41edf259a51a90576d81511f69c9c9514652ef550dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723766320944894907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0,PodSandboxId:85f52cd98ab9c23733520c50b2008764ad3419bd70c3e4ee64be69e10028c7ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723766320928146354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a,PodSandboxId:47870a76c14a5190be9a138cff93f9a137d4fac3e291134c186a3c4272278819,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723766320866828838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565,PodSandboxId:7ff998226c39d956f5e5b9b27602ce04114d99dec2cc9cb3a93cd50dea784d34,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723766320812977597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a9b72cbbbd778125d5a22fbf4e7a0a190ca5277ee444fe2c9cdf8e2f232a2a,PodSandboxId:2f77bdc0a065f29f67f0c6b2f30783f2cb081d56a22c2064447954fe82ba24c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723765999938228931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20,PodSandboxId:cf2239096f991e94cecf74ca246360b59214637277272b57aaf1f720a14a5146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723765946451466934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e,PodSandboxId:9b40c16717dfc0f0801fea14f49cd52360c3aaff620982c76d1d508c9cbc4188,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723765946441200334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36,PodSandboxId:da85041659fd531e3c115fbc4f527f4169a4b6d64ba3b765dd21c679a13270a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723765934667376993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8,PodSandboxId:3d83d3da8eb472e33d62c09bfb3e1fc250e0be253b0845ff59dd418ae7e6301b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723765932331616310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1,PodSandboxId:9c338c9803f3349a087c2e9b6b1be71e0478f9321e47f24c2f64e5c859d58c22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723765922068555261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a,PodSandboxId:62496da5bd532f6b8ee12509ad1330af1be0f7e9d0b9849df57d9005cd292f47,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723765922096588202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781,PodSandboxId:44def40a9dae141695784db8e3794eb7838a530ac4ff28952d84a9315b5a87a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723765922003966650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9,PodSandboxId:2613471cfdea6cd86260f1301204b10795418199bbaac3e5f8b32b513b11c903,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723765921984188216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16c6f262-acf4-4789-ae10-41004b8d739d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.334230387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14a69a1f-e464-472b-a9a5-c69ccbd32045 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.334320365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14a69a1f-e464-472b-a9a5-c69ccbd32045 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.336198209Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aaf3ce8f-0f13-4331-8c4a-27966d65b100 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.336798598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766423336773799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aaf3ce8f-0f13-4331-8c4a-27966d65b100 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.337329103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9428fcc-4835-4830-857d-2a454b8382a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.337382767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9428fcc-4835-4830-857d-2a454b8382a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.339146880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f6dc7afc9283b35c7b80bfdb092e4ae3fe3d7e042fe4ed6c90e16ace9a20de,PodSandboxId:252868a22af5102c4c9fb9fb03664a9404ad51d5ee58cb9bb2986b542b59771d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723766359435754485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d,PodSandboxId:8494ba630f92f5b6b1bb3ca0ec201bc5c1492c1b91d14a632ca739a29091b03b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723766325914164330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e,PodSandboxId:63cf2848b334f2e49bdc9caaa3949d598a94486e14c08345c2c32943a2319c42,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723766325964131703,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5,PodSandboxId:3b1f67d8681e23ed687115ec0575b1ed9112ea9047ba962c1622d4b2b7c6b52c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723766325823196155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05caa33dcdec812c8640535c6d52db6be57c9df197dc03574f9d85c016cdbc53,PodSandboxId:a5caefa2102df20d2a03301f5a6dc4c4448ce0349ecc2a697a48bcb10806c3c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723766325779392896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880,PodSandboxId:2da5135fbede854225eac41edf259a51a90576d81511f69c9c9514652ef550dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723766320944894907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0,PodSandboxId:85f52cd98ab9c23733520c50b2008764ad3419bd70c3e4ee64be69e10028c7ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723766320928146354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a,PodSandboxId:47870a76c14a5190be9a138cff93f9a137d4fac3e291134c186a3c4272278819,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723766320866828838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565,PodSandboxId:7ff998226c39d956f5e5b9b27602ce04114d99dec2cc9cb3a93cd50dea784d34,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723766320812977597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a9b72cbbbd778125d5a22fbf4e7a0a190ca5277ee444fe2c9cdf8e2f232a2a,PodSandboxId:2f77bdc0a065f29f67f0c6b2f30783f2cb081d56a22c2064447954fe82ba24c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723765999938228931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20,PodSandboxId:cf2239096f991e94cecf74ca246360b59214637277272b57aaf1f720a14a5146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723765946451466934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e,PodSandboxId:9b40c16717dfc0f0801fea14f49cd52360c3aaff620982c76d1d508c9cbc4188,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723765946441200334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36,PodSandboxId:da85041659fd531e3c115fbc4f527f4169a4b6d64ba3b765dd21c679a13270a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723765934667376993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8,PodSandboxId:3d83d3da8eb472e33d62c09bfb3e1fc250e0be253b0845ff59dd418ae7e6301b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723765932331616310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1,PodSandboxId:9c338c9803f3349a087c2e9b6b1be71e0478f9321e47f24c2f64e5c859d58c22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723765922068555261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a,PodSandboxId:62496da5bd532f6b8ee12509ad1330af1be0f7e9d0b9849df57d9005cd292f47,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723765922096588202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781,PodSandboxId:44def40a9dae141695784db8e3794eb7838a530ac4ff28952d84a9315b5a87a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723765922003966650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9,PodSandboxId:2613471cfdea6cd86260f1301204b10795418199bbaac3e5f8b32b513b11c903,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723765921984188216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9428fcc-4835-4830-857d-2a454b8382a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.382540126Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37eb9645-e260-466f-a233-0f00fe12ca7c name=/runtime.v1.RuntimeService/Version
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.382633828Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37eb9645-e260-466f-a233-0f00fe12ca7c name=/runtime.v1.RuntimeService/Version
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.383845954Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cc16993-6b27-4d12-bdfb-5e96752123b9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.384265485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766423384240775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cc16993-6b27-4d12-bdfb-5e96752123b9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.384850616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2650ca20-6881-4003-8daa-e122dff608ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.384975901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2650ca20-6881-4003-8daa-e122dff608ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:00:23 multinode-145108 crio[2749]: time="2024-08-16 00:00:23.385378680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f6dc7afc9283b35c7b80bfdb092e4ae3fe3d7e042fe4ed6c90e16ace9a20de,PodSandboxId:252868a22af5102c4c9fb9fb03664a9404ad51d5ee58cb9bb2986b542b59771d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723766359435754485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d,PodSandboxId:8494ba630f92f5b6b1bb3ca0ec201bc5c1492c1b91d14a632ca739a29091b03b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723766325914164330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e,PodSandboxId:63cf2848b334f2e49bdc9caaa3949d598a94486e14c08345c2c32943a2319c42,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723766325964131703,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5,PodSandboxId:3b1f67d8681e23ed687115ec0575b1ed9112ea9047ba962c1622d4b2b7c6b52c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723766325823196155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05caa33dcdec812c8640535c6d52db6be57c9df197dc03574f9d85c016cdbc53,PodSandboxId:a5caefa2102df20d2a03301f5a6dc4c4448ce0349ecc2a697a48bcb10806c3c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723766325779392896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880,PodSandboxId:2da5135fbede854225eac41edf259a51a90576d81511f69c9c9514652ef550dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723766320944894907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0,PodSandboxId:85f52cd98ab9c23733520c50b2008764ad3419bd70c3e4ee64be69e10028c7ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723766320928146354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a,PodSandboxId:47870a76c14a5190be9a138cff93f9a137d4fac3e291134c186a3c4272278819,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723766320866828838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565,PodSandboxId:7ff998226c39d956f5e5b9b27602ce04114d99dec2cc9cb3a93cd50dea784d34,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723766320812977597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a9b72cbbbd778125d5a22fbf4e7a0a190ca5277ee444fe2c9cdf8e2f232a2a,PodSandboxId:2f77bdc0a065f29f67f0c6b2f30783f2cb081d56a22c2064447954fe82ba24c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723765999938228931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20,PodSandboxId:cf2239096f991e94cecf74ca246360b59214637277272b57aaf1f720a14a5146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723765946451466934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e,PodSandboxId:9b40c16717dfc0f0801fea14f49cd52360c3aaff620982c76d1d508c9cbc4188,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723765946441200334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36,PodSandboxId:da85041659fd531e3c115fbc4f527f4169a4b6d64ba3b765dd21c679a13270a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723765934667376993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8,PodSandboxId:3d83d3da8eb472e33d62c09bfb3e1fc250e0be253b0845ff59dd418ae7e6301b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723765932331616310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1,PodSandboxId:9c338c9803f3349a087c2e9b6b1be71e0478f9321e47f24c2f64e5c859d58c22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723765922068555261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a,PodSandboxId:62496da5bd532f6b8ee12509ad1330af1be0f7e9d0b9849df57d9005cd292f47,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723765922096588202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781,PodSandboxId:44def40a9dae141695784db8e3794eb7838a530ac4ff28952d84a9315b5a87a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723765922003966650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9,PodSandboxId:2613471cfdea6cd86260f1301204b10795418199bbaac3e5f8b32b513b11c903,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723765921984188216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2650ca20-6881-4003-8daa-e122dff608ad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b1f6dc7afc928       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   252868a22af51       busybox-7dff88458-h45mw
	8eafd3504e17c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   63cf2848b334f       coredns-6f6b679f8f-4hjxz
	3b90353c92464       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   8494ba630f92f       kindnet-s5nls
	251584fcc4165       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   3b1f67d8681e2       kube-proxy-kcx86
	05caa33dcdec8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a5caefa2102df       storage-provisioner
	0cd0110364c13       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   2da5135fbede8       kube-controller-manager-multinode-145108
	dcbef7b5ec951       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   85f52cd98ab9c       kube-apiserver-multinode-145108
	d472b1ccc9ebf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   47870a76c14a5       kube-scheduler-multinode-145108
	95be1d8c42460       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   7ff998226c39d       etcd-multinode-145108
	57a9b72cbbbd7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   2f77bdc0a065f       busybox-7dff88458-h45mw
	a1f497b941980       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   cf2239096f991       coredns-6f6b679f8f-4hjxz
	7252e4597aa95       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   9b40c16717dfc       storage-provisioner
	e278f8b98e2fd       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   da85041659fd5       kindnet-s5nls
	801914e3b1224       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   3d83d3da8eb47       kube-proxy-kcx86
	a340571a03bb5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   62496da5bd532       etcd-multinode-145108
	e6d7a41786e3d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   9c338c9803f33       kube-scheduler-multinode-145108
	e6b50f7c9ea0b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   44def40a9dae1       kube-controller-manager-multinode-145108
	3297def808e61       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   2613471cfdea6       kube-apiserver-multinode-145108
	
	
	==> coredns [8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42333 - 44399 "HINFO IN 2718109776628081537.769365088325569420. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013436286s
	
	
	==> coredns [a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20] <==
	[INFO] 10.244.0.3:47841 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001745321s
	[INFO] 10.244.0.3:46215 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086963s
	[INFO] 10.244.0.3:46516 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062235s
	[INFO] 10.244.0.3:38734 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001012065s
	[INFO] 10.244.0.3:46251 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043544s
	[INFO] 10.244.0.3:45677 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000038037s
	[INFO] 10.244.0.3:38317 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034451s
	[INFO] 10.244.1.2:40203 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148563s
	[INFO] 10.244.1.2:51238 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084161s
	[INFO] 10.244.1.2:51595 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	[INFO] 10.244.1.2:41913 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106578s
	[INFO] 10.244.0.3:59324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000070871s
	[INFO] 10.244.0.3:43911 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000041772s
	[INFO] 10.244.0.3:50977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036786s
	[INFO] 10.244.0.3:42748 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048599s
	[INFO] 10.244.1.2:60117 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000214611s
	[INFO] 10.244.1.2:41113 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116508s
	[INFO] 10.244.1.2:50876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.001595719s
	[INFO] 10.244.1.2:44338 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159898s
	[INFO] 10.244.0.3:53942 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094778s
	[INFO] 10.244.0.3:60634 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109391s
	[INFO] 10.244.0.3:58182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077413s
	[INFO] 10.244.0.3:39166 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126142s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-145108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-145108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=multinode-145108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T23_52_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:52:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-145108
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:00:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:58:44 +0000   Thu, 15 Aug 2024 23:52:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:58:44 +0000   Thu, 15 Aug 2024 23:52:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:58:44 +0000   Thu, 15 Aug 2024 23:52:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:58:44 +0000   Thu, 15 Aug 2024 23:52:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    multinode-145108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5243210c7e140159abc9e09b0caa559
	  System UUID:                a5243210-c7e1-4015-9abc-9e09b0caa559
	  Boot ID:                    3739afea-d7f7-47db-94c7-f132f026a571
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h45mw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 coredns-6f6b679f8f-4hjxz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m11s
	  kube-system                 etcd-multinode-145108                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m16s
	  kube-system                 kindnet-s5nls                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m12s
	  kube-system                 kube-apiserver-multinode-145108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-controller-manager-multinode-145108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-kcx86                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-multinode-145108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m10s                  kube-proxy       
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  Starting                 8m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m22s (x8 over 8m22s)  kubelet          Node multinode-145108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet          Node multinode-145108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m22s (x7 over 8m22s)  kubelet          Node multinode-145108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m16s                  kubelet          Node multinode-145108 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m16s                  kubelet          Node multinode-145108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m16s                  kubelet          Node multinode-145108 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m16s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m12s                  node-controller  Node multinode-145108 event: Registered Node multinode-145108 in Controller
	  Normal  NodeReady                7m58s                  kubelet          Node multinode-145108 status is now: NodeReady
	  Normal  Starting                 103s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)    kubelet          Node multinode-145108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)    kubelet          Node multinode-145108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)    kubelet          Node multinode-145108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                    node-controller  Node multinode-145108 event: Registered Node multinode-145108 in Controller
	
	
	Name:               multinode-145108-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-145108-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=multinode-145108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_59_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-145108-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:00:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:59:55 +0000   Thu, 15 Aug 2024 23:59:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:59:55 +0000   Thu, 15 Aug 2024 23:59:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:59:55 +0000   Thu, 15 Aug 2024 23:59:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:59:55 +0000   Thu, 15 Aug 2024 23:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    multinode-145108-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3056b26dac145c1a78441e7444d5ce4
	  System UUID:                d3056b26-dac1-45c1-a784-41e7444d5ce4
	  Boot ID:                    62af9a1c-448b-4ea1-a152-dbe385f49419
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tj29q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kindnet-5zpnl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m27s
	  kube-system                 kube-proxy-5t9th           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m23s                  kube-proxy  
	  Normal  Starting                 54s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m27s (x2 over 7m27s)  kubelet     Node multinode-145108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m27s (x2 over 7m27s)  kubelet     Node multinode-145108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m27s (x2 over 7m27s)  kubelet     Node multinode-145108-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m8s                   kubelet     Node multinode-145108-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)      kubelet     Node multinode-145108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)      kubelet     Node multinode-145108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)      kubelet     Node multinode-145108-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-145108-m02 status is now: NodeReady
	
	
	Name:               multinode-145108-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-145108-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=multinode-145108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T00_00_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 00:00:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-145108-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:00:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 00:00:20 +0000   Fri, 16 Aug 2024 00:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 00:00:20 +0000   Fri, 16 Aug 2024 00:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 00:00:20 +0000   Fri, 16 Aug 2024 00:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 00:00:20 +0000   Fri, 16 Aug 2024 00:00:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    multinode-145108-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8271e0dada443be81c0258069e57d12
	  System UUID:                d8271e0d-ada4-43be-81c0-258069e57d12
	  Boot ID:                    52eca14a-20bc-427c-91b0-7047ee027b04
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kdng6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-proxy-2tpvm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From           Message
	  ----    ------                   ----                   ----           -------
	  Normal  Starting                 5m39s                  kube-proxy     
	  Normal  Starting                 6m26s                  kube-proxy     
	  Normal  Starting                 16s                    kube-proxy     
	  Normal  NodeHasSufficientMemory  6m31s (x2 over 6m31s)  kubelet        Node multinode-145108-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s (x2 over 6m31s)  kubelet        Node multinode-145108-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s (x2 over 6m31s)  kubelet        Node multinode-145108-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m13s                  kubelet        Node multinode-145108-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m44s)  kubelet        Node multinode-145108-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s (x2 over 5m44s)  kubelet        Node multinode-145108-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m44s (x2 over 5m44s)  kubelet        Node multinode-145108-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m26s                  kubelet        Node multinode-145108-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     21s                    cidrAllocator  Node multinode-145108-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet        Node multinode-145108-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet        Node multinode-145108-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet        Node multinode-145108-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet        Node multinode-145108-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.058934] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.166644] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.145408] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.271277] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.056455] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.130716] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[Aug15 23:52] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.007148] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +0.081550] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.220077] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[  +0.025659] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.139634] kauditd_printk_skb: 60 callbacks suppressed
	[Aug15 23:53] kauditd_printk_skb: 12 callbacks suppressed
	[Aug15 23:58] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.148213] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.194202] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.157191] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.321309] systemd-fstab-generator[2734]: Ignoring "noauto" option for root device
	[  +2.343540] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +2.164899] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.082563] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.631555] kauditd_printk_skb: 52 callbacks suppressed
	[ +14.354059] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +0.094025] kauditd_printk_skb: 34 callbacks suppressed
	[Aug15 23:59] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565] <==
	{"level":"info","ts":"2024-08-15T23:58:41.241791Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","added-peer-id":"d85ef093c7464643","added-peer-peer-urls":["https://192.168.39.117:2380"]}
	{"level":"info","ts":"2024-08-15T23:58:41.241905Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:58:41.243566Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T23:58:41.243828Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:58:41.243897Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:58:41.247952Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d85ef093c7464643","initial-advertise-peer-urls":["https://192.168.39.117:2380"],"listen-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.117:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T23:58:41.248086Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T23:58:41.248298Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-08-15T23:58:41.248308Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-08-15T23:58:42.992786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:58:42.992862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:58:42.992909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 received MsgPreVoteResp from d85ef093c7464643 at term 2"}
	{"level":"info","ts":"2024-08-15T23:58:42.992928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T23:58:42.992934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 received MsgVoteResp from d85ef093c7464643 at term 3"}
	{"level":"info","ts":"2024-08-15T23:58:42.992943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became leader at term 3"}
	{"level":"info","ts":"2024-08-15T23:58:42.992951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d85ef093c7464643 elected leader d85ef093c7464643 at term 3"}
	{"level":"info","ts":"2024-08-15T23:58:42.995911Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d85ef093c7464643","local-member-attributes":"{Name:multinode-145108 ClientURLs:[https://192.168.39.117:2379]}","request-path":"/0/members/d85ef093c7464643/attributes","cluster-id":"44831ab0f42e7be7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T23:58:42.995937Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:58:42.997135Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:58:42.997318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T23:58:42.997364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T23:58:42.997196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:58:42.998552Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:58:42.999360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T23:58:43.000332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.117:2379"}
	
	
	==> etcd [a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a] <==
	{"level":"info","ts":"2024-08-15T23:52:03.217967Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d85ef093c7464643","local-member-attributes":"{Name:multinode-145108 ClientURLs:[https://192.168.39.117:2379]}","request-path":"/0/members/d85ef093c7464643/attributes","cluster-id":"44831ab0f42e7be7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T23:52:03.218139Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:52:03.218542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:52:03.218635Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:52:03.222076Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:52:03.222120Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:52:03.218716Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T23:52:03.222162Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T23:52:03.219281Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:52:03.225195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.117:2379"}
	{"level":"info","ts":"2024-08-15T23:52:03.226497Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:52:03.227221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-15T23:53:02.962159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.266673ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T23:53:02.962341Z","caller":"traceutil/trace.go:171","msg":"trace[394562956] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:517; }","duration":"101.535089ms","start":"2024-08-15T23:53:02.860783Z","end":"2024-08-15T23:53:02.962318Z","steps":["trace[394562956] 'range keys from in-memory index tree'  (duration: 101.24518ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T23:53:52.548229Z","caller":"traceutil/trace.go:171","msg":"trace[1536968948] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"217.210835ms","start":"2024-08-15T23:53:52.330982Z","end":"2024-08-15T23:53:52.548193Z","steps":["trace[1536968948] 'process raft request'  (duration: 121.827777ms)","trace[1536968948] 'compare'  (duration: 95.170242ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T23:57:03.319291Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T23:57:03.319414Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-145108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	{"level":"warn","ts":"2024-08-15T23:57:03.320149Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:57:03.320290Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:57:03.401521Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:57:03.401793Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T23:57:03.401939Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d85ef093c7464643","current-leader-member-id":"d85ef093c7464643"}
	{"level":"info","ts":"2024-08-15T23:57:03.404510Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-08-15T23:57:03.404726Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-08-15T23:57:03.404763Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-145108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	
	
	==> kernel <==
	 00:00:23 up 8 min,  0 users,  load average: 0.12, 0.18, 0.10
	Linux multinode-145108 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d] <==
	I0815 23:59:36.937130       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:59:46.935266       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:59:46.935416       1 main.go:299] handling current node
	I0815 23:59:46.935450       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:59:46.935468       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:59:46.935712       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:59:46.935772       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:59:56.935408       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:59:56.935460       1 main.go:299] handling current node
	I0815 23:59:56.935475       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:59:56.935481       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:59:56.935620       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:59:56.935763       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0816 00:00:06.936376       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0816 00:00:06.936512       1 main.go:299] handling current node
	I0816 00:00:06.936559       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0816 00:00:06.936568       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0816 00:00:06.936816       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0816 00:00:06.936852       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.2.0/24] 
	I0816 00:00:16.937775       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0816 00:00:16.937848       1 main.go:299] handling current node
	I0816 00:00:16.937866       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0816 00:00:16.937872       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0816 00:00:16.938312       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0816 00:00:16.938537       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36] <==
	I0815 23:56:15.726230       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:25.731924       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:56:25.732063       1 main.go:299] handling current node
	I0815 23:56:25.732125       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:56:25.732148       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:56:25.732307       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:56:25.732331       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:35.725757       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:56:35.725836       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:56:35.726068       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:56:35.726105       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:35.726271       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:56:35.726305       1 main.go:299] handling current node
	I0815 23:56:45.725899       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:56:45.726009       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:56:45.726185       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:56:45.726208       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:45.726273       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:56:45.726292       1 main.go:299] handling current node
	I0815 23:56:55.731095       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:56:55.731143       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:56:55.731281       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:56:55.731305       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:55.731356       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:56:55.731377       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9] <==
	I0815 23:57:03.345471       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:57:03.346230       1 controller.go:157] Shutting down quota evaluator
	I0815 23:57:03.346284       1 controller.go:176] quota evaluator worker shutdown
	I0815 23:57:03.347371       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0815 23:57:03.347514       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0815 23:57:03.348009       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0815 23:57:03.348748       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:57:03.349033       1 controller.go:176] quota evaluator worker shutdown
	I0815 23:57:03.349070       1 controller.go:176] quota evaluator worker shutdown
	I0815 23:57:03.349094       1 controller.go:176] quota evaluator worker shutdown
	I0815 23:57:03.349117       1 controller.go:176] quota evaluator worker shutdown
	E0815 23:57:03.350309       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0815 23:57:03.350557       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0815 23:57:03.352549       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.352930       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353023       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353085       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353145       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353208       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353267       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353420       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353732       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353826       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353906       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353966       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0] <==
	I0815 23:58:44.394185       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 23:58:44.394410       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 23:58:44.394468       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 23:58:44.395597       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 23:58:44.398596       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 23:58:44.401854       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 23:58:44.403061       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 23:58:44.415875       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0815 23:58:44.419368       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0815 23:58:44.424619       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:58:44.424739       1 policy_source.go:224] refreshing policies
	I0815 23:58:44.461232       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 23:58:44.461325       1 aggregator.go:171] initial CRD sync complete...
	I0815 23:58:44.461358       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 23:58:44.461838       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 23:58:44.461978       1 cache.go:39] Caches are synced for autoregister controller
	I0815 23:58:44.510208       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:58:45.310297       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 23:58:46.792363       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 23:58:46.942988       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 23:58:46.958722       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 23:58:47.046139       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 23:58:47.053419       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 23:58:47.816628       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 23:58:48.009409       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880] <==
	I0815 23:59:45.257228       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.076548ms"
	I0815 23:59:45.258387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.254µs"
	I0815 23:59:47.820081       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	I0815 23:59:55.643933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	I0816 00:00:00.932515       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:00.951170       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:01.185288       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0816 00:00:01.185295       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:02.531872       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0816 00:00:02.531976       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-145108-m03\" does not exist"
	I0816 00:00:02.560079       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-145108-m03" podCIDRs=["10.244.2.0/24"]
	I0816 00:00:02.560121       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	E0816 00:00:02.569249       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-145108-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-145108-m03" podCIDRs=["10.244.3.0/24"]
	E0816 00:00:02.569339       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-145108-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-145108-m03"
	E0816 00:00:02.569386       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-145108-m03': failed to patch node CIDR: Node \"multinode-145108-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0816 00:00:02.569406       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:02.575342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:02.703931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:02.871961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:03.063092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:12.605013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:20.448429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:20.449155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0816 00:00:20.460784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:22.840738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	
	
	==> kube-controller-manager [e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781] <==
	I0815 23:54:38.946599       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:38.947289       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0815 23:54:39.979862       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-145108-m03\" does not exist"
	I0815 23:54:39.981807       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0815 23:54:39.994286       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-145108-m03" podCIDRs=["10.244.3.0/24"]
	I0815 23:54:39.994313       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:39.994432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:40.003001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:40.013259       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:40.355210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:41.397863       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:50.026160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:57.804294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:57.804410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0815 23:54:57.819463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:01.339422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:41.358168       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m03"
	I0815 23:55:41.358216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	I0815 23:55:41.362153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:41.378289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	I0815 23:55:41.386574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:41.393354       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.808077ms"
	I0815 23:55:41.393599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.287µs"
	I0815 23:55:46.431745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:56.508029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	
	
	==> kube-proxy [251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:58:46.119797       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:58:46.138055       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.117"]
	E0815 23:58:46.138569       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:58:46.220389       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:58:46.220444       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:58:46.220475       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:58:46.231550       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:58:46.232620       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:58:46.232833       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:58:46.234085       1 config.go:197] "Starting service config controller"
	I0815 23:58:46.234145       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:58:46.234189       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:58:46.234205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:58:46.234758       1 config.go:326] "Starting node config controller"
	I0815 23:58:46.234826       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:58:46.335474       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:58:46.335519       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:58:46.335530       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:52:13.077594       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:52:13.088165       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.117"]
	E0815 23:52:13.088248       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:52:13.137782       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:52:13.137842       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:52:13.137873       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:52:13.140363       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:52:13.140594       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:52:13.140622       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:52:13.145277       1 config.go:197] "Starting service config controller"
	I0815 23:52:13.145331       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:52:13.145363       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:52:13.145367       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:52:13.149566       1 config.go:326] "Starting node config controller"
	I0815 23:52:13.149629       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:52:13.249265       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:52:13.249346       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:52:13.267065       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a] <==
	I0815 23:58:42.243434       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:58:44.432527       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 23:58:44.432571       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:58:44.437148       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 23:58:44.437284       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0815 23:58:44.437336       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0815 23:58:44.437403       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:58:44.442358       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 23:58:44.442374       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 23:58:44.442390       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0815 23:58:44.442395       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0815 23:58:44.538093       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0815 23:58:44.543035       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0815 23:58:44.543037       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1] <==
	W0815 23:52:04.669729       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 23:52:04.672929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:04.669807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 23:52:04.672953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0815 23:52:04.669933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.516564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 23:52:05.516619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.659525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 23:52:05.659581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.689555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 23:52:05.689703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.797320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 23:52:05.797464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.835979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 23:52:05.836058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.856912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 23:52:05.857037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.865744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 23:52:05.865856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.905328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 23:52:05.905462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:06.135124       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 23:52:06.135514       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 23:52:07.831260       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:57:03.315616       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 15 23:58:50 multinode-145108 kubelet[2959]: E0815 23:58:50.225338    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766330221074161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:58:50 multinode-145108 kubelet[2959]: E0815 23:58:50.225738    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766330221074161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:00 multinode-145108 kubelet[2959]: E0815 23:59:00.227404    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766340227078978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:00 multinode-145108 kubelet[2959]: E0815 23:59:00.227435    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766340227078978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:10 multinode-145108 kubelet[2959]: E0815 23:59:10.229712    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766350229264299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:10 multinode-145108 kubelet[2959]: E0815 23:59:10.229801    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766350229264299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:20 multinode-145108 kubelet[2959]: E0815 23:59:20.233947    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766360233144934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:20 multinode-145108 kubelet[2959]: E0815 23:59:20.234135    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766360233144934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:30 multinode-145108 kubelet[2959]: E0815 23:59:30.236753    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766370236235277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:30 multinode-145108 kubelet[2959]: E0815 23:59:30.237245    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766370236235277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:40 multinode-145108 kubelet[2959]: E0815 23:59:40.238853    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766380238488632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:40 multinode-145108 kubelet[2959]: E0815 23:59:40.238899    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766380238488632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:40 multinode-145108 kubelet[2959]: E0815 23:59:40.249749    2959 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 23:59:40 multinode-145108 kubelet[2959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 23:59:40 multinode-145108 kubelet[2959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 23:59:40 multinode-145108 kubelet[2959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 23:59:40 multinode-145108 kubelet[2959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 23:59:50 multinode-145108 kubelet[2959]: E0815 23:59:50.240347    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766390240124902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 23:59:50 multinode-145108 kubelet[2959]: E0815 23:59:50.240391    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766390240124902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:00:00 multinode-145108 kubelet[2959]: E0816 00:00:00.243948    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766400242466882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:00:00 multinode-145108 kubelet[2959]: E0816 00:00:00.244060    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766400242466882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:00:10 multinode-145108 kubelet[2959]: E0816 00:00:10.245697    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766410245302262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:00:10 multinode-145108 kubelet[2959]: E0816 00:00:10.246091    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766410245302262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:00:20 multinode-145108 kubelet[2959]: E0816 00:00:20.248596    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766420248025127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:00:20 multinode-145108 kubelet[2959]: E0816 00:00:20.249225    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766420248025127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:00:22.979232   50292 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19452-12919/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-145108 -n multinode-145108
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-145108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.35s)
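The stderr above shows the log collector failing on an oversized lastStart.txt ("bufio.Scanner: token too long", i.e. a single line longer than Go's default 64 KiB scan buffer). A minimal sketch, assuming shell access to the Jenkins agent, of how one might inspect that file directly; the path is taken verbatim from the stderr, and the commands are ordinary coreutils, not part of the test suite:
	# report the longest line; anything above 65536 bytes explains the bufio.Scanner failure
	wc -L /home/jenkins/minikube-integration/19452-12919/.minikube/logs/lastStart.txt
	# peek at the tail of the file without scanning it line by line
	tail -c 4096 /home/jenkins/minikube-integration/19452-12919/.minikube/logs/lastStart.txt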

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-145108 stop: exit status 82 (2m0.480355806s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-145108-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-145108 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-145108 status: exit status 3 (18.734354737s)

                                                
                                                
-- stdout --
	multinode-145108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-145108-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:02:46.378151   50970 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0816 00:02:46.378184   50970 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-145108 status" : exit status 3
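The stop timed out (exit status 82) and the follow-up status check cannot even reach m02 over SSH (no route to 192.168.39.224:22), leaving the worker VM in an indeterminate state. A minimal sketch, assuming the kvm2 driver's usual convention of naming libvirt domains after the node and assuming shell access to the host, of how one might confirm the VM's actual state outside of minikube; virsh and ping are standard host tooling, not part of the harness:
	# is libvirt still running the worker domain that minikube could not stop?
	sudo virsh list --all | grep multinode-145108-m02
	# the "no route to host" error above suggests the guest network is already gone
	ping -c1 -W1 192.168.39.224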
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-145108 -n multinode-145108
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-145108 logs -n 25: (1.539656114s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m02:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108:/home/docker/cp-test_multinode-145108-m02_multinode-145108.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n multinode-145108 sudo cat                                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /home/docker/cp-test_multinode-145108-m02_multinode-145108.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m02:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03:/home/docker/cp-test_multinode-145108-m02_multinode-145108-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n multinode-145108-m03 sudo cat                                   | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /home/docker/cp-test_multinode-145108-m02_multinode-145108-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp testdata/cp-test.txt                                                | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m03:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1410064125/001/cp-test_multinode-145108-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m03:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108:/home/docker/cp-test_multinode-145108-m03_multinode-145108.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n multinode-145108 sudo cat                                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /home/docker/cp-test_multinode-145108-m03_multinode-145108.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-145108 cp multinode-145108-m03:/home/docker/cp-test.txt                       | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m02:/home/docker/cp-test_multinode-145108-m03_multinode-145108-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n                                                                 | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | multinode-145108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-145108 ssh -n multinode-145108-m02 sudo cat                                   | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | /home/docker/cp-test_multinode-145108-m03_multinode-145108-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-145108 node stop m03                                                          | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	| node    | multinode-145108 node start                                                             | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:54 UTC | 15 Aug 24 23:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-145108                                                                | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:55 UTC |                     |
	| stop    | -p multinode-145108                                                                     | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:55 UTC |                     |
	| start   | -p multinode-145108                                                                     | multinode-145108 | jenkins | v1.33.1 | 15 Aug 24 23:57 UTC | 16 Aug 24 00:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-145108                                                                | multinode-145108 | jenkins | v1.33.1 | 16 Aug 24 00:00 UTC |                     |
	| node    | multinode-145108 node delete                                                            | multinode-145108 | jenkins | v1.33.1 | 16 Aug 24 00:00 UTC | 16 Aug 24 00:00 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-145108 stop                                                                   | multinode-145108 | jenkins | v1.33.1 | 16 Aug 24 00:00 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:57:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:57:02.380271   49141 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:57:02.380514   49141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:57:02.380521   49141 out.go:358] Setting ErrFile to fd 2...
	I0815 23:57:02.380526   49141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:57:02.380690   49141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:57:02.381224   49141 out.go:352] Setting JSON to false
	I0815 23:57:02.382154   49141 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5922,"bootTime":1723760300,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:57:02.382216   49141 start.go:139] virtualization: kvm guest
	I0815 23:57:02.385163   49141 out.go:177] * [multinode-145108] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:57:02.386664   49141 notify.go:220] Checking for updates...
	I0815 23:57:02.386667   49141 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:57:02.388072   49141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:57:02.389330   49141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:57:02.390590   49141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:57:02.391918   49141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:57:02.393345   49141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:57:02.395277   49141 config.go:182] Loaded profile config "multinode-145108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:57:02.395390   49141 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:57:02.395953   49141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:57:02.395998   49141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:57:02.411135   49141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0815 23:57:02.411528   49141 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:57:02.412150   49141 main.go:141] libmachine: Using API Version  1
	I0815 23:57:02.412171   49141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:57:02.412574   49141 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:57:02.412858   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:57:02.450542   49141 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 23:57:02.451891   49141 start.go:297] selected driver: kvm2
	I0815 23:57:02.451915   49141 start.go:901] validating driver "kvm2" against &{Name:multinode-145108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:57:02.452056   49141 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:57:02.452379   49141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:57:02.452448   49141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:57:02.467790   49141 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:57:02.468458   49141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 23:57:02.468489   49141 cni.go:84] Creating CNI manager for ""
	I0815 23:57:02.468496   49141 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 23:57:02.468554   49141 start.go:340] cluster config:
	{Name:multinode-145108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:57:02.468668   49141 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:57:02.470507   49141 out.go:177] * Starting "multinode-145108" primary control-plane node in "multinode-145108" cluster
	I0815 23:57:02.471752   49141 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:57:02.471791   49141 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:57:02.471800   49141 cache.go:56] Caching tarball of preloaded images
	I0815 23:57:02.471890   49141 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 23:57:02.471904   49141 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:57:02.472043   49141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/config.json ...
	I0815 23:57:02.472253   49141 start.go:360] acquireMachinesLock for multinode-145108: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 23:57:02.472300   49141 start.go:364] duration metric: took 28.897µs to acquireMachinesLock for "multinode-145108"
	I0815 23:57:02.472315   49141 start.go:96] Skipping create...Using existing machine configuration
	I0815 23:57:02.472324   49141 fix.go:54] fixHost starting: 
	I0815 23:57:02.472636   49141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:57:02.472673   49141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:57:02.487042   49141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38973
	I0815 23:57:02.487444   49141 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:57:02.487885   49141 main.go:141] libmachine: Using API Version  1
	I0815 23:57:02.487903   49141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:57:02.488213   49141 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:57:02.488394   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:57:02.488525   49141 main.go:141] libmachine: (multinode-145108) Calling .GetState
	I0815 23:57:02.490017   49141 fix.go:112] recreateIfNeeded on multinode-145108: state=Running err=<nil>
	W0815 23:57:02.490051   49141 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 23:57:02.492254   49141 out.go:177] * Updating the running kvm2 "multinode-145108" VM ...
	I0815 23:57:02.493481   49141 machine.go:93] provisionDockerMachine start ...
	I0815 23:57:02.493499   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:57:02.493695   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:02.496544   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.497024   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.497043   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.497217   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:02.497404   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.497580   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.497703   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:02.497856   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:57:02.498056   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:57:02.498068   49141 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 23:57:02.603695   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-145108
	
	I0815 23:57:02.603727   49141 main.go:141] libmachine: (multinode-145108) Calling .GetMachineName
	I0815 23:57:02.603948   49141 buildroot.go:166] provisioning hostname "multinode-145108"
	I0815 23:57:02.603974   49141 main.go:141] libmachine: (multinode-145108) Calling .GetMachineName
	I0815 23:57:02.604202   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:02.606591   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.606916   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.606945   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.607129   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:02.607315   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.607469   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.607605   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:02.607763   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:57:02.607976   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:57:02.607992   49141 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-145108 && echo "multinode-145108" | sudo tee /etc/hostname
	I0815 23:57:02.725278   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-145108
	
	I0815 23:57:02.725304   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:02.728177   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.728552   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.728586   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.728797   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:02.728997   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.729258   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:02.729375   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:02.729536   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:57:02.729735   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:57:02.729753   49141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-145108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-145108/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-145108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 23:57:02.835741   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
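	(Editor's sketch: the SSH command above ensures /etc/hosts maps 127.0.1.1 to the provisioned hostname, rewriting an existing 127.0.1.1 entry or appending one. A minimal standalone Go sketch of the same idea, operating on the file contents as a string rather than over SSH, follows; the helper name ensureHostsEntry is illustrative and not minikube's actual code.)
	
	package main
	
	import (
		"fmt"
		"regexp"
		"strings"
	)
	
	// ensureHostsEntry mirrors the shell logic above: if no line already ends in
	// the hostname, either rewrite an existing "127.0.1.1 ..." line or append one.
	func ensureHostsEntry(hosts, hostname string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
			return hosts // entry already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	
	func main() {
		hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Print(ensureHostsEntry(hosts, "multinode-145108"))
	}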
	I0815 23:57:02.835775   49141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0815 23:57:02.835806   49141 buildroot.go:174] setting up certificates
	I0815 23:57:02.835818   49141 provision.go:84] configureAuth start
	I0815 23:57:02.835827   49141 main.go:141] libmachine: (multinode-145108) Calling .GetMachineName
	I0815 23:57:02.836099   49141 main.go:141] libmachine: (multinode-145108) Calling .GetIP
	I0815 23:57:02.838788   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.839132   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.839158   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.839318   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:02.841862   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.842341   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:02.842361   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:02.842558   49141 provision.go:143] copyHostCerts
	I0815 23:57:02.842584   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:57:02.842624   49141 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0815 23:57:02.842638   49141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0815 23:57:02.842705   49141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0815 23:57:02.842809   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:57:02.842828   49141 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0815 23:57:02.842833   49141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0815 23:57:02.842857   49141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0815 23:57:02.842915   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:57:02.842957   49141 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0815 23:57:02.842963   49141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0815 23:57:02.842984   49141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0815 23:57:02.843080   49141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.multinode-145108 san=[127.0.0.1 192.168.39.117 localhost minikube multinode-145108]
	I0815 23:57:03.023336   49141 provision.go:177] copyRemoteCerts
	I0815 23:57:03.023394   49141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 23:57:03.023420   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:03.026344   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:03.026661   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:03.026680   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:03.026887   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:03.027056   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:03.027235   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:03.027374   49141 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108/id_rsa Username:docker}
	I0815 23:57:03.111445   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 23:57:03.111513   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 23:57:03.139651   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 23:57:03.139714   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0815 23:57:03.168877   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 23:57:03.168948   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 23:57:03.194569   49141 provision.go:87] duration metric: took 358.737626ms to configureAuth
	I0815 23:57:03.194599   49141 buildroot.go:189] setting minikube options for container-runtime
	I0815 23:57:03.194842   49141 config.go:182] Loaded profile config "multinode-145108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:57:03.194924   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:57:03.197637   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:03.198051   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:57:03.198080   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:57:03.198333   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:57:03.198536   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:03.198718   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:57:03.198839   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:57:03.198989   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:57:03.199206   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:57:03.199222   49141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 23:58:33.974079   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 23:58:33.974126   49141 machine.go:96] duration metric: took 1m31.480631427s to provisionDockerMachine
	I0815 23:58:33.974147   49141 start.go:293] postStartSetup for "multinode-145108" (driver="kvm2")
	I0815 23:58:33.974167   49141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 23:58:33.974195   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:33.974536   49141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 23:58:33.974586   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:58:33.977782   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:33.978267   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:33.978297   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:33.978445   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:58:33.978675   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:33.978835   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:58:33.978967   49141 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108/id_rsa Username:docker}
	I0815 23:58:34.061780   49141 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 23:58:34.066190   49141 command_runner.go:130] > NAME=Buildroot
	I0815 23:58:34.066207   49141 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0815 23:58:34.066212   49141 command_runner.go:130] > ID=buildroot
	I0815 23:58:34.066223   49141 command_runner.go:130] > VERSION_ID=2023.02.9
	I0815 23:58:34.066228   49141 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0815 23:58:34.066260   49141 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 23:58:34.066279   49141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0815 23:58:34.066350   49141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0815 23:58:34.066417   49141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0815 23:58:34.066428   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /etc/ssl/certs/200782.pem
	I0815 23:58:34.066509   49141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 23:58:34.076221   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:58:34.100977   49141 start.go:296] duration metric: took 126.813763ms for postStartSetup
	I0815 23:58:34.101028   49141 fix.go:56] duration metric: took 1m31.628707548s for fixHost
	I0815 23:58:34.101054   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:58:34.103594   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.103981   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:34.104002   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.104218   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:58:34.104424   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:34.104579   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:34.104707   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:58:34.104868   49141 main.go:141] libmachine: Using SSH client type: native
	I0815 23:58:34.105029   49141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.117 22 <nil> <nil>}
	I0815 23:58:34.105039   49141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 23:58:34.207057   49141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723766314.184559552
	
	I0815 23:58:34.207086   49141 fix.go:216] guest clock: 1723766314.184559552
	I0815 23:58:34.207106   49141 fix.go:229] Guest: 2024-08-15 23:58:34.184559552 +0000 UTC Remote: 2024-08-15 23:58:34.101036221 +0000 UTC m=+91.753799094 (delta=83.523331ms)
	I0815 23:58:34.207142   49141 fix.go:200] guest clock delta is within tolerance: 83.523331ms
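	(Editor's sketch: the two fix.go lines above read the guest clock via `date +%s.%N`, compare it to the host's wall clock, and accept the skew if it is small. A minimal hedged Go sketch of that comparison is below; the 2-second tolerance is an assumed illustrative value, not necessarily the one minikube applies.)
	
	package main
	
	import (
		"fmt"
		"strconv"
		"time"
	)
	
	func main() {
		// Guest clock as printed by `date +%s.%N` (seconds.nanoseconds).
		guestOut := "1723766314.184559552"
		secs, _ := strconv.ParseFloat(guestOut, 64)
		guest := time.Unix(0, int64(secs*float64(time.Second)))
	
		remote := time.Now() // host-side wall clock at the time of the check
	
		// Absolute drift between guest and host.
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
	
		const tolerance = 2 * time.Second // assumed value for illustration
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
	}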
	I0815 23:58:34.207152   49141 start.go:83] releasing machines lock for "multinode-145108", held for 1m31.734841259s
	I0815 23:58:34.207181   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:34.207419   49141 main.go:141] libmachine: (multinode-145108) Calling .GetIP
	I0815 23:58:34.210175   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.210576   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:34.210606   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.210765   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:34.211262   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:34.211449   49141 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:58:34.211556   49141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 23:58:34.211597   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:58:34.211663   49141 ssh_runner.go:195] Run: cat /version.json
	I0815 23:58:34.211686   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:58:34.214002   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.214314   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.214366   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:34.214390   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.214523   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:58:34.214688   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:34.214783   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:34.214820   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:34.214875   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:58:34.214963   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:58:34.215017   49141 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108/id_rsa Username:docker}
	I0815 23:58:34.215096   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:58:34.215224   49141 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:58:34.215365   49141 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108/id_rsa Username:docker}
	I0815 23:58:34.291012   49141 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0815 23:58:34.307997   49141 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0815 23:58:34.308831   49141 ssh_runner.go:195] Run: systemctl --version
	I0815 23:58:34.314815   49141 command_runner.go:130] > systemd 252 (252)
	I0815 23:58:34.314868   49141 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0815 23:58:34.314973   49141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 23:58:34.477295   49141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 23:58:34.483852   49141 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0815 23:58:34.483886   49141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 23:58:34.483936   49141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 23:58:34.493475   49141 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 23:58:34.493499   49141 start.go:495] detecting cgroup driver to use...
	I0815 23:58:34.493551   49141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 23:58:34.509970   49141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 23:58:34.524971   49141 docker.go:217] disabling cri-docker service (if available) ...
	I0815 23:58:34.525040   49141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 23:58:34.539462   49141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 23:58:34.554484   49141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 23:58:34.702198   49141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 23:58:34.853294   49141 docker.go:233] disabling docker service ...
	I0815 23:58:34.853369   49141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 23:58:34.873514   49141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 23:58:34.888277   49141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 23:58:35.045106   49141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 23:58:35.209001   49141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 23:58:35.224107   49141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 23:58:35.243864   49141 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0815 23:58:35.244323   49141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 23:58:35.244383   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.255632   49141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 23:58:35.255703   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.267712   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.278874   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.291119   49141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 23:58:35.302671   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.313792   49141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.325718   49141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 23:58:35.337136   49141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 23:58:35.348605   49141 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0815 23:58:35.348686   49141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 23:58:35.365577   49141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:58:35.525361   49141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 23:58:37.401040   49141 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.875634112s)
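	(Editor's sketch: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager before the service is restarted. A minimal Go sketch of the same two substitutions, applied to an in-memory copy of the file for illustration rather than via sed over SSH, follows; the sample "before" contents are assumed.)
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// Illustrative contents of /etc/crio/crio.conf.d/02-crio.conf before the edits.
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	`
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	
		fmt.Print(conf)
	}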
	I0815 23:58:37.401086   49141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 23:58:37.401143   49141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 23:58:37.409642   49141 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0815 23:58:37.409671   49141 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0815 23:58:37.409681   49141 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0815 23:58:37.409689   49141 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 23:58:37.409696   49141 command_runner.go:130] > Access: 2024-08-15 23:58:37.272665430 +0000
	I0815 23:58:37.409706   49141 command_runner.go:130] > Modify: 2024-08-15 23:58:37.271665407 +0000
	I0815 23:58:37.409715   49141 command_runner.go:130] > Change: 2024-08-15 23:58:37.271665407 +0000
	I0815 23:58:37.409724   49141 command_runner.go:130] >  Birth: -
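	(Editor's sketch: after the restart, start.go waits up to 60s for /var/run/crio/crio.sock to appear and then stats it, as logged above. A minimal standalone Go sketch of such a poll loop is below; the poll interval, the use of os.Stat, and the waitForSocket name are assumptions for illustration, not minikube's actual implementation.)
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until path exists (e.g. a Unix socket) or the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}
	
	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is ready")
	}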
	I0815 23:58:37.409773   49141 start.go:563] Will wait 60s for crictl version
	I0815 23:58:37.409826   49141 ssh_runner.go:195] Run: which crictl
	I0815 23:58:37.413868   49141 command_runner.go:130] > /usr/bin/crictl
	I0815 23:58:37.413942   49141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 23:58:37.448793   49141 command_runner.go:130] > Version:  0.1.0
	I0815 23:58:37.448817   49141 command_runner.go:130] > RuntimeName:  cri-o
	I0815 23:58:37.448824   49141 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0815 23:58:37.448831   49141 command_runner.go:130] > RuntimeApiVersion:  v1
	I0815 23:58:37.448953   49141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 23:58:37.449045   49141 ssh_runner.go:195] Run: crio --version
	I0815 23:58:37.477379   49141 command_runner.go:130] > crio version 1.29.1
	I0815 23:58:37.477403   49141 command_runner.go:130] > Version:        1.29.1
	I0815 23:58:37.477411   49141 command_runner.go:130] > GitCommit:      unknown
	I0815 23:58:37.477417   49141 command_runner.go:130] > GitCommitDate:  unknown
	I0815 23:58:37.477423   49141 command_runner.go:130] > GitTreeState:   clean
	I0815 23:58:37.477431   49141 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0815 23:58:37.477437   49141 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 23:58:37.477443   49141 command_runner.go:130] > Compiler:       gc
	I0815 23:58:37.477460   49141 command_runner.go:130] > Platform:       linux/amd64
	I0815 23:58:37.477472   49141 command_runner.go:130] > Linkmode:       dynamic
	I0815 23:58:37.477478   49141 command_runner.go:130] > BuildTags:      
	I0815 23:58:37.477486   49141 command_runner.go:130] >   containers_image_ostree_stub
	I0815 23:58:37.477494   49141 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 23:58:37.477502   49141 command_runner.go:130] >   btrfs_noversion
	I0815 23:58:37.477507   49141 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 23:58:37.477514   49141 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 23:58:37.477518   49141 command_runner.go:130] >   seccomp
	I0815 23:58:37.477523   49141 command_runner.go:130] > LDFlags:          unknown
	I0815 23:58:37.477527   49141 command_runner.go:130] > SeccompEnabled:   true
	I0815 23:58:37.477532   49141 command_runner.go:130] > AppArmorEnabled:  false
	I0815 23:58:37.477597   49141 ssh_runner.go:195] Run: crio --version
	I0815 23:58:37.512400   49141 command_runner.go:130] > crio version 1.29.1
	I0815 23:58:37.512419   49141 command_runner.go:130] > Version:        1.29.1
	I0815 23:58:37.512425   49141 command_runner.go:130] > GitCommit:      unknown
	I0815 23:58:37.512429   49141 command_runner.go:130] > GitCommitDate:  unknown
	I0815 23:58:37.512433   49141 command_runner.go:130] > GitTreeState:   clean
	I0815 23:58:37.512441   49141 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0815 23:58:37.512448   49141 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 23:58:37.512452   49141 command_runner.go:130] > Compiler:       gc
	I0815 23:58:37.512456   49141 command_runner.go:130] > Platform:       linux/amd64
	I0815 23:58:37.512460   49141 command_runner.go:130] > Linkmode:       dynamic
	I0815 23:58:37.512466   49141 command_runner.go:130] > BuildTags:      
	I0815 23:58:37.512471   49141 command_runner.go:130] >   containers_image_ostree_stub
	I0815 23:58:37.512481   49141 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 23:58:37.512489   49141 command_runner.go:130] >   btrfs_noversion
	I0815 23:58:37.512495   49141 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 23:58:37.512502   49141 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 23:58:37.512509   49141 command_runner.go:130] >   seccomp
	I0815 23:58:37.512516   49141 command_runner.go:130] > LDFlags:          unknown
	I0815 23:58:37.512525   49141 command_runner.go:130] > SeccompEnabled:   true
	I0815 23:58:37.512532   49141 command_runner.go:130] > AppArmorEnabled:  false
	I0815 23:58:37.514401   49141 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 23:58:37.515885   49141 main.go:141] libmachine: (multinode-145108) Calling .GetIP
	I0815 23:58:37.518390   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:37.518699   49141 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:58:37.518720   49141 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:58:37.518899   49141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 23:58:37.523283   49141 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0815 23:58:37.523506   49141 kubeadm.go:883] updating cluster {Name:multinode-145108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 23:58:37.523654   49141 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:58:37.523700   49141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:58:37.566357   49141 command_runner.go:130] > {
	I0815 23:58:37.566380   49141 command_runner.go:130] >   "images": [
	I0815 23:58:37.566384   49141 command_runner.go:130] >     {
	I0815 23:58:37.566393   49141 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 23:58:37.566397   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566403   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 23:58:37.566407   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566411   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566423   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 23:58:37.566430   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 23:58:37.566434   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566439   49141 command_runner.go:130] >       "size": "87165492",
	I0815 23:58:37.566446   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566450   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566456   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566460   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566464   49141 command_runner.go:130] >     },
	I0815 23:58:37.566467   49141 command_runner.go:130] >     {
	I0815 23:58:37.566474   49141 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 23:58:37.566479   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566485   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 23:58:37.566491   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566495   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566502   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 23:58:37.566512   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 23:58:37.566516   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566520   49141 command_runner.go:130] >       "size": "87190579",
	I0815 23:58:37.566527   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566535   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566539   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566546   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566549   49141 command_runner.go:130] >     },
	I0815 23:58:37.566552   49141 command_runner.go:130] >     {
	I0815 23:58:37.566558   49141 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 23:58:37.566563   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566568   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 23:58:37.566573   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566578   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566585   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 23:58:37.566594   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 23:58:37.566598   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566605   49141 command_runner.go:130] >       "size": "1363676",
	I0815 23:58:37.566608   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566612   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566616   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566620   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566623   49141 command_runner.go:130] >     },
	I0815 23:58:37.566627   49141 command_runner.go:130] >     {
	I0815 23:58:37.566635   49141 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 23:58:37.566639   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566646   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 23:58:37.566649   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566653   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566660   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 23:58:37.566675   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 23:58:37.566680   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566684   49141 command_runner.go:130] >       "size": "31470524",
	I0815 23:58:37.566688   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566692   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566696   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566700   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566703   49141 command_runner.go:130] >     },
	I0815 23:58:37.566707   49141 command_runner.go:130] >     {
	I0815 23:58:37.566713   49141 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 23:58:37.566719   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566724   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 23:58:37.566730   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566734   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566741   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 23:58:37.566750   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 23:58:37.566754   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566758   49141 command_runner.go:130] >       "size": "61245718",
	I0815 23:58:37.566765   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.566770   49141 command_runner.go:130] >       "username": "nonroot",
	I0815 23:58:37.566775   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566779   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566785   49141 command_runner.go:130] >     },
	I0815 23:58:37.566788   49141 command_runner.go:130] >     {
	I0815 23:58:37.566794   49141 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 23:58:37.566800   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566806   49141 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 23:58:37.566811   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566815   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566824   49141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 23:58:37.566831   49141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 23:58:37.566836   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566840   49141 command_runner.go:130] >       "size": "149009664",
	I0815 23:58:37.566845   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.566850   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.566855   49141 command_runner.go:130] >       },
	I0815 23:58:37.566859   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566865   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566869   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566872   49141 command_runner.go:130] >     },
	I0815 23:58:37.566875   49141 command_runner.go:130] >     {
	I0815 23:58:37.566881   49141 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 23:58:37.566887   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566892   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 23:58:37.566895   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566899   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566906   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 23:58:37.566916   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 23:58:37.566919   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566923   49141 command_runner.go:130] >       "size": "95233506",
	I0815 23:58:37.566926   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.566930   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.566934   49141 command_runner.go:130] >       },
	I0815 23:58:37.566937   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.566941   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.566945   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.566949   49141 command_runner.go:130] >     },
	I0815 23:58:37.566952   49141 command_runner.go:130] >     {
	I0815 23:58:37.566958   49141 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 23:58:37.566964   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.566969   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 23:58:37.566975   49141 command_runner.go:130] >       ],
	I0815 23:58:37.566980   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.566993   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 23:58:37.567002   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 23:58:37.567015   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567019   49141 command_runner.go:130] >       "size": "89437512",
	I0815 23:58:37.567022   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.567026   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.567029   49141 command_runner.go:130] >       },
	I0815 23:58:37.567033   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.567037   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.567040   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.567043   49141 command_runner.go:130] >     },
	I0815 23:58:37.567046   49141 command_runner.go:130] >     {
	I0815 23:58:37.567052   49141 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 23:58:37.567056   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.567060   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 23:58:37.567063   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567067   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.567074   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 23:58:37.567081   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 23:58:37.567085   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567088   49141 command_runner.go:130] >       "size": "92728217",
	I0815 23:58:37.567092   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.567095   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.567099   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.567103   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.567106   49141 command_runner.go:130] >     },
	I0815 23:58:37.567109   49141 command_runner.go:130] >     {
	I0815 23:58:37.567115   49141 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 23:58:37.567118   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.567123   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 23:58:37.567126   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567130   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.567136   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 23:58:37.567143   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 23:58:37.567146   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567151   49141 command_runner.go:130] >       "size": "68420936",
	I0815 23:58:37.567155   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.567158   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.567162   49141 command_runner.go:130] >       },
	I0815 23:58:37.567166   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.567172   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.567175   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.567180   49141 command_runner.go:130] >     },
	I0815 23:58:37.567183   49141 command_runner.go:130] >     {
	I0815 23:58:37.567189   49141 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 23:58:37.567195   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.567199   49141 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 23:58:37.567203   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567207   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.567213   49141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 23:58:37.567223   49141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 23:58:37.567226   49141 command_runner.go:130] >       ],
	I0815 23:58:37.567230   49141 command_runner.go:130] >       "size": "742080",
	I0815 23:58:37.567236   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.567240   49141 command_runner.go:130] >         "value": "65535"
	I0815 23:58:37.567243   49141 command_runner.go:130] >       },
	I0815 23:58:37.567247   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.567251   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.567255   49141 command_runner.go:130] >       "pinned": true
	I0815 23:58:37.567258   49141 command_runner.go:130] >     }
	I0815 23:58:37.567261   49141 command_runner.go:130] >   ]
	I0815 23:58:37.567264   49141 command_runner.go:130] > }
	I0815 23:58:37.568116   49141 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:58:37.568134   49141 crio.go:433] Images already preloaded, skipping extraction
	I0815 23:58:37.568189   49141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 23:58:37.601906   49141 command_runner.go:130] > {
	I0815 23:58:37.601931   49141 command_runner.go:130] >   "images": [
	I0815 23:58:37.601937   49141 command_runner.go:130] >     {
	I0815 23:58:37.601945   49141 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 23:58:37.601955   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.601982   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 23:58:37.601990   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602002   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602016   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 23:58:37.602023   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 23:58:37.602027   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602034   49141 command_runner.go:130] >       "size": "87165492",
	I0815 23:58:37.602039   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602045   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602051   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602058   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602061   49141 command_runner.go:130] >     },
	I0815 23:58:37.602067   49141 command_runner.go:130] >     {
	I0815 23:58:37.602074   49141 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 23:58:37.602080   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602085   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 23:58:37.602091   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602095   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602102   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 23:58:37.602111   49141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 23:58:37.602116   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602120   49141 command_runner.go:130] >       "size": "87190579",
	I0815 23:58:37.602127   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602133   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602140   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602144   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602151   49141 command_runner.go:130] >     },
	I0815 23:58:37.602155   49141 command_runner.go:130] >     {
	I0815 23:58:37.602162   49141 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 23:58:37.602168   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602174   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 23:58:37.602179   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602184   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602193   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 23:58:37.602199   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 23:58:37.602205   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602209   49141 command_runner.go:130] >       "size": "1363676",
	I0815 23:58:37.602215   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602218   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602224   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602228   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602234   49141 command_runner.go:130] >     },
	I0815 23:58:37.602237   49141 command_runner.go:130] >     {
	I0815 23:58:37.602246   49141 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 23:58:37.602250   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602255   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 23:58:37.602261   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602266   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602275   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 23:58:37.602288   49141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 23:58:37.602293   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602297   49141 command_runner.go:130] >       "size": "31470524",
	I0815 23:58:37.602303   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602307   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602313   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602317   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602322   49141 command_runner.go:130] >     },
	I0815 23:58:37.602326   49141 command_runner.go:130] >     {
	I0815 23:58:37.602334   49141 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 23:58:37.602337   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602343   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 23:58:37.602347   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602352   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602361   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 23:58:37.602370   49141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 23:58:37.602376   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602381   49141 command_runner.go:130] >       "size": "61245718",
	I0815 23:58:37.602385   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602391   49141 command_runner.go:130] >       "username": "nonroot",
	I0815 23:58:37.602395   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602401   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602405   49141 command_runner.go:130] >     },
	I0815 23:58:37.602410   49141 command_runner.go:130] >     {
	I0815 23:58:37.602416   49141 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 23:58:37.602422   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602427   49141 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 23:58:37.602432   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602436   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602446   49141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 23:58:37.602452   49141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 23:58:37.602463   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602467   49141 command_runner.go:130] >       "size": "149009664",
	I0815 23:58:37.602470   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602474   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.602478   49141 command_runner.go:130] >       },
	I0815 23:58:37.602482   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602485   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602489   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602492   49141 command_runner.go:130] >     },
	I0815 23:58:37.602495   49141 command_runner.go:130] >     {
	I0815 23:58:37.602501   49141 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 23:58:37.602505   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602510   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 23:58:37.602513   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602518   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602524   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 23:58:37.602534   49141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 23:58:37.602538   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602548   49141 command_runner.go:130] >       "size": "95233506",
	I0815 23:58:37.602552   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602556   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.602560   49141 command_runner.go:130] >       },
	I0815 23:58:37.602564   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602567   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602571   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602575   49141 command_runner.go:130] >     },
	I0815 23:58:37.602578   49141 command_runner.go:130] >     {
	I0815 23:58:37.602585   49141 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 23:58:37.602589   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602595   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 23:58:37.602598   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602602   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602616   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 23:58:37.602626   49141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 23:58:37.602629   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602633   49141 command_runner.go:130] >       "size": "89437512",
	I0815 23:58:37.602637   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602641   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.602645   49141 command_runner.go:130] >       },
	I0815 23:58:37.602649   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602653   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602657   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602663   49141 command_runner.go:130] >     },
	I0815 23:58:37.602666   49141 command_runner.go:130] >     {
	I0815 23:58:37.602672   49141 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 23:58:37.602676   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602680   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 23:58:37.602686   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602690   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602696   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 23:58:37.602707   49141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 23:58:37.602713   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602716   49141 command_runner.go:130] >       "size": "92728217",
	I0815 23:58:37.602720   49141 command_runner.go:130] >       "uid": null,
	I0815 23:58:37.602725   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602731   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602735   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602739   49141 command_runner.go:130] >     },
	I0815 23:58:37.602744   49141 command_runner.go:130] >     {
	I0815 23:58:37.602750   49141 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 23:58:37.602754   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602761   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 23:58:37.602764   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602768   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602775   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 23:58:37.602783   49141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 23:58:37.602787   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602791   49141 command_runner.go:130] >       "size": "68420936",
	I0815 23:58:37.602794   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602798   49141 command_runner.go:130] >         "value": "0"
	I0815 23:58:37.602804   49141 command_runner.go:130] >       },
	I0815 23:58:37.602808   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602812   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602815   49141 command_runner.go:130] >       "pinned": false
	I0815 23:58:37.602819   49141 command_runner.go:130] >     },
	I0815 23:58:37.602822   49141 command_runner.go:130] >     {
	I0815 23:58:37.602828   49141 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 23:58:37.602833   49141 command_runner.go:130] >       "repoTags": [
	I0815 23:58:37.602838   49141 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 23:58:37.602841   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602845   49141 command_runner.go:130] >       "repoDigests": [
	I0815 23:58:37.602851   49141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 23:58:37.602858   49141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 23:58:37.602862   49141 command_runner.go:130] >       ],
	I0815 23:58:37.602866   49141 command_runner.go:130] >       "size": "742080",
	I0815 23:58:37.602870   49141 command_runner.go:130] >       "uid": {
	I0815 23:58:37.602874   49141 command_runner.go:130] >         "value": "65535"
	I0815 23:58:37.602878   49141 command_runner.go:130] >       },
	I0815 23:58:37.602882   49141 command_runner.go:130] >       "username": "",
	I0815 23:58:37.602885   49141 command_runner.go:130] >       "spec": null,
	I0815 23:58:37.602890   49141 command_runner.go:130] >       "pinned": true
	I0815 23:58:37.602893   49141 command_runner.go:130] >     }
	I0815 23:58:37.602896   49141 command_runner.go:130] >   ]
	I0815 23:58:37.602901   49141 command_runner.go:130] > }
	I0815 23:58:37.603463   49141 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 23:58:37.603476   49141 cache_images.go:84] Images are preloaded, skipping loading
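	(Editor's note: the preload check above consumes the `sudo crictl images --output json` payload shown in the log. The following is a minimal, hypothetical Go sketch of how such a payload could be decoded and listed; the struct fields mirror only the JSON keys visible above ("id", "repoTags", "repoDigests", "pinned") and this is not minikube's actual cache_images/crio implementation.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the shape of the `crictl images --output json` output
	// captured in the log above; only the fields needed for a quick listing
	// are declared here (assumption for illustration, not the full schema).
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Stand-in for the ssh_runner call in the log; runs crictl locally here.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, img := range list.Images {
			// Print each tag with a shortened image ID, similar to what the
			// preload check inspects before deciding to skip extraction.
			fmt.Println(img.RepoTags, img.ID[:12])
		}
	}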
	I0815 23:58:37.603490   49141 kubeadm.go:934] updating node { 192.168.39.117 8443 v1.31.0 crio true true} ...
	I0815 23:58:37.603608   49141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-145108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
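	(Editor's note: the kubelet [Unit]/[Service]/[Install] text logged by kubeadm.go:946 above is a systemd drop-in parameterized by the Kubernetes version, node name, and node IP. The sketch below, assuming those three inputs, shows one hypothetical way to assemble that text in Go; the function name and template are illustrative and are not minikube's actual code.)

	package main

	import "fmt"

	// kubeletUnit assembles a kubelet systemd drop-in like the one printed in
	// the log above. The argument names are assumptions for this example.
	func kubeletUnit(version, nodeName, nodeIP string) string {
		return fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, version, nodeName, nodeIP)
	}

	func main() {
		// Values taken from the log line above for multinode-145108.
		fmt.Print(kubeletUnit("v1.31.0", "multinode-145108", "192.168.39.117"))
	}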
	I0815 23:58:37.603670   49141 ssh_runner.go:195] Run: crio config
	I0815 23:58:37.644464   49141 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0815 23:58:37.644496   49141 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0815 23:58:37.644507   49141 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0815 23:58:37.644512   49141 command_runner.go:130] > #
	I0815 23:58:37.644522   49141 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0815 23:58:37.644539   49141 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0815 23:58:37.644552   49141 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0815 23:58:37.644564   49141 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0815 23:58:37.644573   49141 command_runner.go:130] > # reload'.
	I0815 23:58:37.644583   49141 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0815 23:58:37.644591   49141 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0815 23:58:37.644599   49141 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0815 23:58:37.644611   49141 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0815 23:58:37.644618   49141 command_runner.go:130] > [crio]
	I0815 23:58:37.644627   49141 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0815 23:58:37.644639   49141 command_runner.go:130] > # containers images, in this directory.
	I0815 23:58:37.644648   49141 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0815 23:58:37.644662   49141 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0815 23:58:37.644668   49141 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0815 23:58:37.644676   49141 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0815 23:58:37.644926   49141 command_runner.go:130] > # imagestore = ""
	I0815 23:58:37.644950   49141 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0815 23:58:37.644962   49141 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0815 23:58:37.645036   49141 command_runner.go:130] > storage_driver = "overlay"
	I0815 23:58:37.645052   49141 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0815 23:58:37.645080   49141 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0815 23:58:37.645089   49141 command_runner.go:130] > storage_option = [
	I0815 23:58:37.645218   49141 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0815 23:58:37.645247   49141 command_runner.go:130] > ]
	I0815 23:58:37.645259   49141 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0815 23:58:37.645271   49141 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0815 23:58:37.645546   49141 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0815 23:58:37.645562   49141 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0815 23:58:37.645572   49141 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0815 23:58:37.645583   49141 command_runner.go:130] > # always happen on a node reboot
	I0815 23:58:37.645928   49141 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0815 23:58:37.645950   49141 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0815 23:58:37.645959   49141 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0815 23:58:37.645967   49141 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0815 23:58:37.646110   49141 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0815 23:58:37.646126   49141 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0815 23:58:37.646138   49141 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0815 23:58:37.646519   49141 command_runner.go:130] > # internal_wipe = true
	I0815 23:58:37.646541   49141 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0815 23:58:37.646551   49141 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0815 23:58:37.646863   49141 command_runner.go:130] > # internal_repair = false
	I0815 23:58:37.646875   49141 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0815 23:58:37.646882   49141 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0815 23:58:37.646891   49141 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0815 23:58:37.647157   49141 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0815 23:58:37.647174   49141 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0815 23:58:37.647180   49141 command_runner.go:130] > [crio.api]
	I0815 23:58:37.647192   49141 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0815 23:58:37.647438   49141 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0815 23:58:37.647452   49141 command_runner.go:130] > # IP address on which the stream server will listen.
	I0815 23:58:37.647794   49141 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0815 23:58:37.647806   49141 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0815 23:58:37.647812   49141 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0815 23:58:37.648055   49141 command_runner.go:130] > # stream_port = "0"
	I0815 23:58:37.648065   49141 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0815 23:58:37.648338   49141 command_runner.go:130] > # stream_enable_tls = false
	I0815 23:58:37.648355   49141 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0815 23:58:37.648554   49141 command_runner.go:130] > # stream_idle_timeout = ""
	I0815 23:58:37.648564   49141 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0815 23:58:37.648570   49141 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0815 23:58:37.648574   49141 command_runner.go:130] > # minutes.
	I0815 23:58:37.648852   49141 command_runner.go:130] > # stream_tls_cert = ""
	I0815 23:58:37.648863   49141 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0815 23:58:37.648869   49141 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0815 23:58:37.649069   49141 command_runner.go:130] > # stream_tls_key = ""
	I0815 23:58:37.649079   49141 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0815 23:58:37.649085   49141 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0815 23:58:37.649107   49141 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0815 23:58:37.649308   49141 command_runner.go:130] > # stream_tls_ca = ""
	I0815 23:58:37.649319   49141 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 23:58:37.649480   49141 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0815 23:58:37.649497   49141 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 23:58:37.649662   49141 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0815 23:58:37.649672   49141 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0815 23:58:37.649678   49141 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0815 23:58:37.649682   49141 command_runner.go:130] > [crio.runtime]
	I0815 23:58:37.649691   49141 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0815 23:58:37.649703   49141 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0815 23:58:37.649714   49141 command_runner.go:130] > # "nofile=1024:2048"
	I0815 23:58:37.649731   49141 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0815 23:58:37.649852   49141 command_runner.go:130] > # default_ulimits = [
	I0815 23:58:37.649932   49141 command_runner.go:130] > # ]
	I0815 23:58:37.649951   49141 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0815 23:58:37.649957   49141 command_runner.go:130] > # no_pivot = false
	I0815 23:58:37.649966   49141 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0815 23:58:37.649975   49141 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0815 23:58:37.649988   49141 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0815 23:58:37.649997   49141 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0815 23:58:37.650007   49141 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0815 23:58:37.650018   49141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 23:58:37.650028   49141 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0815 23:58:37.650035   49141 command_runner.go:130] > # Cgroup setting for conmon
	I0815 23:58:37.650050   49141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0815 23:58:37.650060   49141 command_runner.go:130] > conmon_cgroup = "pod"
	I0815 23:58:37.650069   49141 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0815 23:58:37.650077   49141 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0815 23:58:37.650094   49141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 23:58:37.650103   49141 command_runner.go:130] > conmon_env = [
	I0815 23:58:37.650114   49141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 23:58:37.650123   49141 command_runner.go:130] > ]
	I0815 23:58:37.650132   49141 command_runner.go:130] > # Additional environment variables to set for all the
	I0815 23:58:37.650143   49141 command_runner.go:130] > # containers. These are overridden if set in the
	I0815 23:58:37.650153   49141 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0815 23:58:37.650161   49141 command_runner.go:130] > # default_env = [
	I0815 23:58:37.650166   49141 command_runner.go:130] > # ]
	I0815 23:58:37.650179   49141 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0815 23:58:37.650193   49141 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0815 23:58:37.650203   49141 command_runner.go:130] > # selinux = false
	I0815 23:58:37.650213   49141 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0815 23:58:37.650226   49141 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0815 23:58:37.650238   49141 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0815 23:58:37.650244   49141 command_runner.go:130] > # seccomp_profile = ""
	I0815 23:58:37.650256   49141 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0815 23:58:37.650268   49141 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0815 23:58:37.650281   49141 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0815 23:58:37.650294   49141 command_runner.go:130] > # which might increase security.
	I0815 23:58:37.650304   49141 command_runner.go:130] > # This option is currently deprecated,
	I0815 23:58:37.650314   49141 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0815 23:58:37.650325   49141 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0815 23:58:37.650336   49141 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0815 23:58:37.650349   49141 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0815 23:58:37.650361   49141 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0815 23:58:37.650375   49141 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0815 23:58:37.650387   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.650398   49141 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0815 23:58:37.650409   49141 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0815 23:58:37.650417   49141 command_runner.go:130] > # the cgroup blockio controller.
	I0815 23:58:37.650428   49141 command_runner.go:130] > # blockio_config_file = ""
	I0815 23:58:37.650438   49141 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0815 23:58:37.650448   49141 command_runner.go:130] > # blockio parameters.
	I0815 23:58:37.650455   49141 command_runner.go:130] > # blockio_reload = false
	I0815 23:58:37.650468   49141 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0815 23:58:37.650478   49141 command_runner.go:130] > # irqbalance daemon.
	I0815 23:58:37.650487   49141 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0815 23:58:37.650498   49141 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0815 23:58:37.650513   49141 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0815 23:58:37.650523   49141 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0815 23:58:37.650536   49141 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0815 23:58:37.650548   49141 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0815 23:58:37.650559   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.650568   49141 command_runner.go:130] > # rdt_config_file = ""
	I0815 23:58:37.650580   49141 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0815 23:58:37.650588   49141 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0815 23:58:37.650632   49141 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0815 23:58:37.650643   49141 command_runner.go:130] > # separate_pull_cgroup = ""
	I0815 23:58:37.650653   49141 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0815 23:58:37.650665   49141 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0815 23:58:37.650674   49141 command_runner.go:130] > # will be added.
	I0815 23:58:37.650680   49141 command_runner.go:130] > # default_capabilities = [
	I0815 23:58:37.650689   49141 command_runner.go:130] > # 	"CHOWN",
	I0815 23:58:37.650695   49141 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0815 23:58:37.650706   49141 command_runner.go:130] > # 	"FSETID",
	I0815 23:58:37.650714   49141 command_runner.go:130] > # 	"FOWNER",
	I0815 23:58:37.650719   49141 command_runner.go:130] > # 	"SETGID",
	I0815 23:58:37.650728   49141 command_runner.go:130] > # 	"SETUID",
	I0815 23:58:37.650734   49141 command_runner.go:130] > # 	"SETPCAP",
	I0815 23:58:37.650744   49141 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0815 23:58:37.650750   49141 command_runner.go:130] > # 	"KILL",
	I0815 23:58:37.650758   49141 command_runner.go:130] > # ]
	I0815 23:58:37.650775   49141 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0815 23:58:37.650789   49141 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0815 23:58:37.650802   49141 command_runner.go:130] > # add_inheritable_capabilities = false
	I0815 23:58:37.650814   49141 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0815 23:58:37.650826   49141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 23:58:37.650835   49141 command_runner.go:130] > default_sysctls = [
	I0815 23:58:37.650844   49141 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0815 23:58:37.650853   49141 command_runner.go:130] > ]
	I0815 23:58:37.650860   49141 command_runner.go:130] > # List of devices on the host that a
	I0815 23:58:37.650872   49141 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0815 23:58:37.650882   49141 command_runner.go:130] > # allowed_devices = [
	I0815 23:58:37.650888   49141 command_runner.go:130] > # 	"/dev/fuse",
	I0815 23:58:37.650896   49141 command_runner.go:130] > # ]
	I0815 23:58:37.650904   49141 command_runner.go:130] > # List of additional devices. specified as
	I0815 23:58:37.650918   49141 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0815 23:58:37.650929   49141 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0815 23:58:37.650938   49141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 23:58:37.650949   49141 command_runner.go:130] > # additional_devices = [
	I0815 23:58:37.650954   49141 command_runner.go:130] > # ]
	I0815 23:58:37.650965   49141 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0815 23:58:37.650975   49141 command_runner.go:130] > # cdi_spec_dirs = [
	I0815 23:58:37.650981   49141 command_runner.go:130] > # 	"/etc/cdi",
	I0815 23:58:37.650990   49141 command_runner.go:130] > # 	"/var/run/cdi",
	I0815 23:58:37.650995   49141 command_runner.go:130] > # ]
	I0815 23:58:37.651007   49141 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0815 23:58:37.651020   49141 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0815 23:58:37.651030   49141 command_runner.go:130] > # Defaults to false.
	I0815 23:58:37.651037   49141 command_runner.go:130] > # device_ownership_from_security_context = false
	I0815 23:58:37.651053   49141 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0815 23:58:37.651067   49141 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0815 23:58:37.651073   49141 command_runner.go:130] > # hooks_dir = [
	I0815 23:58:37.651081   49141 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0815 23:58:37.651089   49141 command_runner.go:130] > # ]
	I0815 23:58:37.651098   49141 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0815 23:58:37.651112   49141 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0815 23:58:37.651123   49141 command_runner.go:130] > # its default mounts from the following two files:
	I0815 23:58:37.651128   49141 command_runner.go:130] > #
	I0815 23:58:37.651143   49141 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0815 23:58:37.651160   49141 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0815 23:58:37.651172   49141 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0815 23:58:37.651180   49141 command_runner.go:130] > #
	I0815 23:58:37.651190   49141 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0815 23:58:37.651202   49141 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0815 23:58:37.651214   49141 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0815 23:58:37.651222   49141 command_runner.go:130] > #      only add mounts it finds in this file.
	I0815 23:58:37.651230   49141 command_runner.go:130] > #
	I0815 23:58:37.651237   49141 command_runner.go:130] > # default_mounts_file = ""
	I0815 23:58:37.651248   49141 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0815 23:58:37.651259   49141 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0815 23:58:37.651270   49141 command_runner.go:130] > pids_limit = 1024
	I0815 23:58:37.651280   49141 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0815 23:58:37.651289   49141 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0815 23:58:37.651304   49141 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0815 23:58:37.651315   49141 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0815 23:58:37.651328   49141 command_runner.go:130] > # log_size_max = -1
	I0815 23:58:37.651342   49141 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0815 23:58:37.651352   49141 command_runner.go:130] > # log_to_journald = false
	I0815 23:58:37.651361   49141 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0815 23:58:37.651368   49141 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0815 23:58:37.651380   49141 command_runner.go:130] > # Path to directory for container attach sockets.
	I0815 23:58:37.651391   49141 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0815 23:58:37.651403   49141 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0815 23:58:37.651413   49141 command_runner.go:130] > # bind_mount_prefix = ""
	I0815 23:58:37.651423   49141 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0815 23:58:37.651449   49141 command_runner.go:130] > # read_only = false
	I0815 23:58:37.651462   49141 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0815 23:58:37.651475   49141 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0815 23:58:37.651485   49141 command_runner.go:130] > # live configuration reload.
	I0815 23:58:37.651492   49141 command_runner.go:130] > # log_level = "info"
	I0815 23:58:37.651503   49141 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0815 23:58:37.651512   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.651519   49141 command_runner.go:130] > # log_filter = ""
	I0815 23:58:37.651529   49141 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0815 23:58:37.651543   49141 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0815 23:58:37.651548   49141 command_runner.go:130] > # separated by comma.
	I0815 23:58:37.651561   49141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 23:58:37.651570   49141 command_runner.go:130] > # uid_mappings = ""
	I0815 23:58:37.651580   49141 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0815 23:58:37.651592   49141 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0815 23:58:37.651602   49141 command_runner.go:130] > # separated by comma.
	I0815 23:58:37.651613   49141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 23:58:37.651623   49141 command_runner.go:130] > # gid_mappings = ""
	I0815 23:58:37.651651   49141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0815 23:58:37.651665   49141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 23:58:37.651675   49141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 23:58:37.651689   49141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 23:58:37.651710   49141 command_runner.go:130] > # minimum_mappable_uid = -1
	I0815 23:58:37.651724   49141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0815 23:58:37.651733   49141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 23:58:37.651745   49141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 23:58:37.651757   49141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 23:58:37.651769   49141 command_runner.go:130] > # minimum_mappable_gid = -1
	I0815 23:58:37.651780   49141 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0815 23:58:37.651792   49141 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0815 23:58:37.651805   49141 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0815 23:58:37.651815   49141 command_runner.go:130] > # ctr_stop_timeout = 30
	I0815 23:58:37.651825   49141 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0815 23:58:37.651837   49141 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0815 23:58:37.651847   49141 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0815 23:58:37.651855   49141 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0815 23:58:37.651872   49141 command_runner.go:130] > drop_infra_ctr = false
	I0815 23:58:37.651885   49141 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0815 23:58:37.651896   49141 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0815 23:58:37.651910   49141 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0815 23:58:37.651920   49141 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0815 23:58:37.651930   49141 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0815 23:58:37.651943   49141 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0815 23:58:37.651952   49141 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0815 23:58:37.651965   49141 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0815 23:58:37.651972   49141 command_runner.go:130] > # shared_cpuset = ""
	I0815 23:58:37.651981   49141 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0815 23:58:37.651992   49141 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0815 23:58:37.652002   49141 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0815 23:58:37.652013   49141 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0815 23:58:37.652023   49141 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0815 23:58:37.652032   49141 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0815 23:58:37.652044   49141 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0815 23:58:37.652054   49141 command_runner.go:130] > # enable_criu_support = false
	I0815 23:58:37.652062   49141 command_runner.go:130] > # Enable/disable the generation of the container,
	I0815 23:58:37.652075   49141 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0815 23:58:37.652084   49141 command_runner.go:130] > # enable_pod_events = false
	I0815 23:58:37.652095   49141 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0815 23:58:37.652119   49141 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0815 23:58:37.652129   49141 command_runner.go:130] > # default_runtime = "runc"
	I0815 23:58:37.652140   49141 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0815 23:58:37.652154   49141 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0815 23:58:37.652171   49141 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0815 23:58:37.652182   49141 command_runner.go:130] > # creation as a file is not desired either.
	I0815 23:58:37.652195   49141 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0815 23:58:37.652205   49141 command_runner.go:130] > # the hostname is being managed dynamically.
	I0815 23:58:37.652216   49141 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0815 23:58:37.652222   49141 command_runner.go:130] > # ]
	I0815 23:58:37.652235   49141 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0815 23:58:37.652247   49141 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0815 23:58:37.652259   49141 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0815 23:58:37.652271   49141 command_runner.go:130] > # Each entry in the table should follow the format:
	I0815 23:58:37.652279   49141 command_runner.go:130] > #
	I0815 23:58:37.652287   49141 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0815 23:58:37.652297   49141 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0815 23:58:37.652318   49141 command_runner.go:130] > # runtime_type = "oci"
	I0815 23:58:37.652328   49141 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0815 23:58:37.652336   49141 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0815 23:58:37.652346   49141 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0815 23:58:37.652353   49141 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0815 23:58:37.652361   49141 command_runner.go:130] > # monitor_env = []
	I0815 23:58:37.652368   49141 command_runner.go:130] > # privileged_without_host_devices = false
	I0815 23:58:37.652377   49141 command_runner.go:130] > # allowed_annotations = []
	I0815 23:58:37.652391   49141 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0815 23:58:37.652400   49141 command_runner.go:130] > # Where:
	I0815 23:58:37.652408   49141 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0815 23:58:37.652421   49141 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0815 23:58:37.652433   49141 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0815 23:58:37.652446   49141 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0815 23:58:37.652455   49141 command_runner.go:130] > #   in $PATH.
	I0815 23:58:37.652464   49141 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0815 23:58:37.652476   49141 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0815 23:58:37.652489   49141 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0815 23:58:37.652497   49141 command_runner.go:130] > #   state.
	I0815 23:58:37.652507   49141 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0815 23:58:37.652520   49141 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0815 23:58:37.652530   49141 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0815 23:58:37.652541   49141 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0815 23:58:37.652552   49141 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0815 23:58:37.652565   49141 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0815 23:58:37.652575   49141 command_runner.go:130] > #   The currently recognized values are:
	I0815 23:58:37.652587   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0815 23:58:37.652602   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0815 23:58:37.652613   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0815 23:58:37.652625   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0815 23:58:37.652639   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0815 23:58:37.652651   49141 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0815 23:58:37.652665   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0815 23:58:37.652678   49141 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0815 23:58:37.652690   49141 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0815 23:58:37.652703   49141 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0815 23:58:37.652713   49141 command_runner.go:130] > #   deprecated option "conmon".
	I0815 23:58:37.652726   49141 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0815 23:58:37.652737   49141 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0815 23:58:37.652749   49141 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0815 23:58:37.652760   49141 command_runner.go:130] > #   should be moved to the container's cgroup
	I0815 23:58:37.652777   49141 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0815 23:58:37.652788   49141 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0815 23:58:37.652800   49141 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0815 23:58:37.652812   49141 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0815 23:58:37.652820   49141 command_runner.go:130] > #
	I0815 23:58:37.652827   49141 command_runner.go:130] > # Using the seccomp notifier feature:
	I0815 23:58:37.652835   49141 command_runner.go:130] > #
	I0815 23:58:37.652844   49141 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0815 23:58:37.652856   49141 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0815 23:58:37.652864   49141 command_runner.go:130] > #
	I0815 23:58:37.652873   49141 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0815 23:58:37.652887   49141 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0815 23:58:37.652895   49141 command_runner.go:130] > #
	I0815 23:58:37.652905   49141 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0815 23:58:37.652913   49141 command_runner.go:130] > # feature.
	I0815 23:58:37.652918   49141 command_runner.go:130] > #
	I0815 23:58:37.652930   49141 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0815 23:58:37.652941   49141 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0815 23:58:37.652950   49141 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0815 23:58:37.652958   49141 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0815 23:58:37.652967   49141 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0815 23:58:37.652972   49141 command_runner.go:130] > #
	I0815 23:58:37.652979   49141 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0815 23:58:37.652987   49141 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0815 23:58:37.652993   49141 command_runner.go:130] > #
	I0815 23:58:37.652999   49141 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0815 23:58:37.653007   49141 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0815 23:58:37.653012   49141 command_runner.go:130] > #
	I0815 23:58:37.653019   49141 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0815 23:58:37.653028   49141 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0815 23:58:37.653034   49141 command_runner.go:130] > # limitation.
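The notifier therefore has two prerequisites visible in this config: a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction", and a pod that sets that annotation with restartPolicy "Never". A minimal sketch of the first half, assuming a CRI-O drop-in directory at /etc/crio/crio.conf.d (the file name and its contents are illustrative, not taken from this report):

	# Sketch only: permit the runc handler to process the seccomp notifier
	# annotation via a CRI-O drop-in, then restart CRI-O to pick it up.
	# The drop-in path and file name are assumptions, not from this log.
	sudo tee /etc/crio/crio.conf.d/10-seccomp-notifier.conf <<-'EOF'
		[crio.runtime.runtimes.runc]
		allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio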
	I0815 23:58:37.653040   49141 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0815 23:58:37.653046   49141 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0815 23:58:37.653050   49141 command_runner.go:130] > runtime_type = "oci"
	I0815 23:58:37.653056   49141 command_runner.go:130] > runtime_root = "/run/runc"
	I0815 23:58:37.653060   49141 command_runner.go:130] > runtime_config_path = ""
	I0815 23:58:37.653073   49141 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0815 23:58:37.653079   49141 command_runner.go:130] > monitor_cgroup = "pod"
	I0815 23:58:37.653083   49141 command_runner.go:130] > monitor_exec_cgroup = ""
	I0815 23:58:37.653089   49141 command_runner.go:130] > monitor_env = [
	I0815 23:58:37.653095   49141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 23:58:37.653101   49141 command_runner.go:130] > ]
	I0815 23:58:37.653105   49141 command_runner.go:130] > privileged_without_host_devices = false
	I0815 23:58:37.653113   49141 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0815 23:58:37.653122   49141 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0815 23:58:37.653128   49141 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0815 23:58:37.653137   49141 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0815 23:58:37.653146   49141 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0815 23:58:37.653154   49141 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0815 23:58:37.653164   49141 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0815 23:58:37.653173   49141 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0815 23:58:37.653181   49141 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0815 23:58:37.653191   49141 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0815 23:58:37.653194   49141 command_runner.go:130] > # Example:
	I0815 23:58:37.653199   49141 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0815 23:58:37.653203   49141 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0815 23:58:37.653207   49141 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0815 23:58:37.653212   49141 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0815 23:58:37.653215   49141 command_runner.go:130] > # cpuset = 0
	I0815 23:58:37.653219   49141 command_runner.go:130] > # cpushares = "0-1"
	I0815 23:58:37.653222   49141 command_runner.go:130] > # Where:
	I0815 23:58:37.653226   49141 command_runner.go:130] > # The workload name is workload-type.
	I0815 23:58:37.653232   49141 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0815 23:58:37.653238   49141 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0815 23:58:37.653243   49141 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0815 23:58:37.653251   49141 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0815 23:58:37.653256   49141 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0815 23:58:37.653261   49141 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0815 23:58:37.653267   49141 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0815 23:58:37.653271   49141 command_runner.go:130] > # Default value is set to true
	I0815 23:58:37.653275   49141 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0815 23:58:37.653280   49141 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0815 23:58:37.653284   49141 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0815 23:58:37.653288   49141 command_runner.go:130] > # Default value is set to 'false'
	I0815 23:58:37.653292   49141 command_runner.go:130] > # disable_hostport_mapping = false
	I0815 23:58:37.653298   49141 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0815 23:58:37.653301   49141 command_runner.go:130] > #
	I0815 23:58:37.653306   49141 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0815 23:58:37.653312   49141 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0815 23:58:37.653317   49141 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0815 23:58:37.653332   49141 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0815 23:58:37.653341   49141 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0815 23:58:37.653344   49141 command_runner.go:130] > [crio.image]
	I0815 23:58:37.653350   49141 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0815 23:58:37.653354   49141 command_runner.go:130] > # default_transport = "docker://"
	I0815 23:58:37.653360   49141 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0815 23:58:37.653366   49141 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0815 23:58:37.653369   49141 command_runner.go:130] > # global_auth_file = ""
	I0815 23:58:37.653376   49141 command_runner.go:130] > # The image used to instantiate infra containers.
	I0815 23:58:37.653384   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.653388   49141 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0815 23:58:37.653396   49141 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0815 23:58:37.653402   49141 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0815 23:58:37.653408   49141 command_runner.go:130] > # This option supports live configuration reload.
	I0815 23:58:37.653413   49141 command_runner.go:130] > # pause_image_auth_file = ""
	I0815 23:58:37.653419   49141 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0815 23:58:37.653427   49141 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0815 23:58:37.653435   49141 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0815 23:58:37.653443   49141 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0815 23:58:37.653448   49141 command_runner.go:130] > # pause_command = "/pause"
	I0815 23:58:37.653456   49141 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0815 23:58:37.653463   49141 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0815 23:58:37.653471   49141 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0815 23:58:37.653477   49141 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0815 23:58:37.653485   49141 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0815 23:58:37.653491   49141 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0815 23:58:37.653497   49141 command_runner.go:130] > # pinned_images = [
	I0815 23:58:37.653500   49141 command_runner.go:130] > # ]
	I0815 23:58:37.653508   49141 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0815 23:58:37.653514   49141 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0815 23:58:37.653522   49141 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0815 23:58:37.653530   49141 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0815 23:58:37.653535   49141 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0815 23:58:37.653541   49141 command_runner.go:130] > # signature_policy = ""
	I0815 23:58:37.653546   49141 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0815 23:58:37.653555   49141 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0815 23:58:37.653562   49141 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0815 23:58:37.653568   49141 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0815 23:58:37.653576   49141 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0815 23:58:37.653581   49141 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0815 23:58:37.653589   49141 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0815 23:58:37.653597   49141 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0815 23:58:37.653601   49141 command_runner.go:130] > # changing them here.
	I0815 23:58:37.653607   49141 command_runner.go:130] > # insecure_registries = [
	I0815 23:58:37.653610   49141 command_runner.go:130] > # ]
	I0815 23:58:37.653618   49141 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0815 23:58:37.653625   49141 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0815 23:58:37.653629   49141 command_runner.go:130] > # image_volumes = "mkdir"
	I0815 23:58:37.653636   49141 command_runner.go:130] > # Temporary directory to use for storing big files
	I0815 23:58:37.653640   49141 command_runner.go:130] > # big_files_temporary_dir = ""
	I0815 23:58:37.653648   49141 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0815 23:58:37.653656   49141 command_runner.go:130] > # CNI plugins.
	I0815 23:58:37.653662   49141 command_runner.go:130] > [crio.network]
	I0815 23:58:37.653672   49141 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0815 23:58:37.653683   49141 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0815 23:58:37.653693   49141 command_runner.go:130] > # cni_default_network = ""
	I0815 23:58:37.653705   49141 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0815 23:58:37.653715   49141 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0815 23:58:37.653726   49141 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0815 23:58:37.653736   49141 command_runner.go:130] > # plugin_dirs = [
	I0815 23:58:37.653746   49141 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0815 23:58:37.653753   49141 command_runner.go:130] > # ]
	I0815 23:58:37.653758   49141 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0815 23:58:37.653768   49141 command_runner.go:130] > [crio.metrics]
	I0815 23:58:37.653775   49141 command_runner.go:130] > # Globally enable or disable metrics support.
	I0815 23:58:37.653779   49141 command_runner.go:130] > enable_metrics = true
	I0815 23:58:37.653786   49141 command_runner.go:130] > # Specify enabled metrics collectors.
	I0815 23:58:37.653791   49141 command_runner.go:130] > # Per default all metrics are enabled.
	I0815 23:58:37.653799   49141 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0815 23:58:37.653808   49141 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0815 23:58:37.653815   49141 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0815 23:58:37.653819   49141 command_runner.go:130] > # metrics_collectors = [
	I0815 23:58:37.653825   49141 command_runner.go:130] > # 	"operations",
	I0815 23:58:37.653830   49141 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0815 23:58:37.653836   49141 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0815 23:58:37.653851   49141 command_runner.go:130] > # 	"operations_errors",
	I0815 23:58:37.653862   49141 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0815 23:58:37.653869   49141 command_runner.go:130] > # 	"image_pulls_by_name",
	I0815 23:58:37.653876   49141 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0815 23:58:37.653885   49141 command_runner.go:130] > # 	"image_pulls_failures",
	I0815 23:58:37.653890   49141 command_runner.go:130] > # 	"image_pulls_successes",
	I0815 23:58:37.653897   49141 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0815 23:58:37.653901   49141 command_runner.go:130] > # 	"image_layer_reuse",
	I0815 23:58:37.653908   49141 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0815 23:58:37.653913   49141 command_runner.go:130] > # 	"containers_oom_total",
	I0815 23:58:37.653919   49141 command_runner.go:130] > # 	"containers_oom",
	I0815 23:58:37.653923   49141 command_runner.go:130] > # 	"processes_defunct",
	I0815 23:58:37.653929   49141 command_runner.go:130] > # 	"operations_total",
	I0815 23:58:37.653933   49141 command_runner.go:130] > # 	"operations_latency_seconds",
	I0815 23:58:37.653940   49141 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0815 23:58:37.653945   49141 command_runner.go:130] > # 	"operations_errors_total",
	I0815 23:58:37.653952   49141 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0815 23:58:37.653957   49141 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0815 23:58:37.653966   49141 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0815 23:58:37.653976   49141 command_runner.go:130] > # 	"image_pulls_success_total",
	I0815 23:58:37.653986   49141 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0815 23:58:37.653996   49141 command_runner.go:130] > # 	"containers_oom_count_total",
	I0815 23:58:37.654006   49141 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0815 23:58:37.654012   49141 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0815 23:58:37.654020   49141 command_runner.go:130] > # ]
	I0815 23:58:37.654031   49141 command_runner.go:130] > # The port on which the metrics server will listen.
	I0815 23:58:37.654039   49141 command_runner.go:130] > # metrics_port = 9090
	I0815 23:58:37.654049   49141 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0815 23:58:37.654055   49141 command_runner.go:130] > # metrics_socket = ""
	I0815 23:58:37.654066   49141 command_runner.go:130] > # The certificate for the secure metrics server.
	I0815 23:58:37.654079   49141 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0815 23:58:37.654090   49141 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0815 23:58:37.654100   49141 command_runner.go:130] > # certificate on any modification event.
	I0815 23:58:37.654110   49141 command_runner.go:130] > # metrics_cert = ""
	I0815 23:58:37.654120   49141 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0815 23:58:37.654131   49141 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0815 23:58:37.654139   49141 command_runner.go:130] > # metrics_key = ""
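Since this profile sets enable_metrics = true and leaves metrics_port and metrics_cert at their defaults, the collectors listed above are exposed as plain-HTTP Prometheus metrics on port 9090. A hedged sketch of scraping them from the node (the grep pattern is only an example):

	# Sketch only: scrape CRI-O's Prometheus endpoint on the default port 9090.
	# No metrics_cert is configured in this profile, so the endpoint is plain HTTP.
	curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations'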
	I0815 23:58:37.654148   49141 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0815 23:58:37.654157   49141 command_runner.go:130] > [crio.tracing]
	I0815 23:58:37.654166   49141 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0815 23:58:37.654175   49141 command_runner.go:130] > # enable_tracing = false
	I0815 23:58:37.654181   49141 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0815 23:58:37.654188   49141 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0815 23:58:37.654194   49141 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0815 23:58:37.654201   49141 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0815 23:58:37.654205   49141 command_runner.go:130] > # CRI-O NRI configuration.
	I0815 23:58:37.654210   49141 command_runner.go:130] > [crio.nri]
	I0815 23:58:37.654214   49141 command_runner.go:130] > # Globally enable or disable NRI.
	I0815 23:58:37.654219   49141 command_runner.go:130] > # enable_nri = false
	I0815 23:58:37.654224   49141 command_runner.go:130] > # NRI socket to listen on.
	I0815 23:58:37.654230   49141 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0815 23:58:37.654234   49141 command_runner.go:130] > # NRI plugin directory to use.
	I0815 23:58:37.654240   49141 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0815 23:58:37.654246   49141 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0815 23:58:37.654251   49141 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0815 23:58:37.654259   49141 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0815 23:58:37.654263   49141 command_runner.go:130] > # nri_disable_connections = false
	I0815 23:58:37.654272   49141 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0815 23:58:37.654282   49141 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0815 23:58:37.654294   49141 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0815 23:58:37.654303   49141 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0815 23:58:37.654315   49141 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0815 23:58:37.654323   49141 command_runner.go:130] > [crio.stats]
	I0815 23:58:37.654334   49141 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0815 23:58:37.654345   49141 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0815 23:58:37.654354   49141 command_runner.go:130] > # stats_collection_period = 0
	I0815 23:58:37.654396   49141 command_runner.go:130] ! time="2024-08-15 23:58:37.613055484Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0815 23:58:37.654413   49141 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0815 23:58:37.654547   49141 cni.go:84] Creating CNI manager for ""
	I0815 23:58:37.654559   49141 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 23:58:37.654569   49141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 23:58:37.654595   49141 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.117 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-145108 NodeName:multinode-145108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 23:58:37.654739   49141 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-145108"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 23:58:37.654817   49141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 23:58:37.665097   49141 command_runner.go:130] > kubeadm
	I0815 23:58:37.665119   49141 command_runner.go:130] > kubectl
	I0815 23:58:37.665124   49141 command_runner.go:130] > kubelet
	I0815 23:58:37.665145   49141 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 23:58:37.665201   49141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 23:58:37.674993   49141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0815 23:58:37.692734   49141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 23:58:37.709859   49141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
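The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node before the kubelet is restarted. With kubeadm v1.31 the file could also be sanity-checked in place; a sketch assuming the upstream "kubeadm config validate" subcommand, which this report does not actually run:

	# Sketch only: verify the generated kubeadm config parses and targets
	# supported API versions. The validate subcommand is assumed from upstream
	# kubeadm; it is not invoked anywhere in this log.
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new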
	I0815 23:58:37.726803   49141 ssh_runner.go:195] Run: grep 192.168.39.117	control-plane.minikube.internal$ /etc/hosts
	I0815 23:58:37.731013   49141 command_runner.go:130] > 192.168.39.117	control-plane.minikube.internal
	I0815 23:58:37.731111   49141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 23:58:37.865714   49141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 23:58:37.883417   49141 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108 for IP: 192.168.39.117
	I0815 23:58:37.883447   49141 certs.go:194] generating shared ca certs ...
	I0815 23:58:37.883470   49141 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:58:37.883674   49141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0815 23:58:37.883733   49141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0815 23:58:37.883752   49141 certs.go:256] generating profile certs ...
	I0815 23:58:37.883862   49141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/client.key
	I0815 23:58:37.883923   49141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.key.cfce1887
	I0815 23:58:37.883973   49141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.key
	I0815 23:58:37.883984   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 23:58:37.883996   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 23:58:37.884009   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 23:58:37.884019   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 23:58:37.884031   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 23:58:37.884044   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 23:58:37.884066   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 23:58:37.884078   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 23:58:37.884137   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0815 23:58:37.884163   49141 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0815 23:58:37.884175   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 23:58:37.884198   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0815 23:58:37.884220   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0815 23:58:37.884242   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0815 23:58:37.884282   49141 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0815 23:58:37.884309   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> /usr/share/ca-certificates/200782.pem
	I0815 23:58:37.884324   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:37.884338   49141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem -> /usr/share/ca-certificates/20078.pem
	I0815 23:58:37.884945   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 23:58:37.909748   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 23:58:37.934154   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 23:58:37.958235   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 23:58:37.985223   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 23:58:38.010104   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 23:58:38.034503   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 23:58:38.059116   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/multinode-145108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 23:58:38.083402   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0815 23:58:38.107458   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 23:58:38.133204   49141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0815 23:58:38.157964   49141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 23:58:38.175522   49141 ssh_runner.go:195] Run: openssl version
	I0815 23:58:38.181800   49141 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0815 23:58:38.181874   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0815 23:58:38.192838   49141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0815 23:58:38.197430   49141 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:58:38.197586   49141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0815 23:58:38.197648   49141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0815 23:58:38.203608   49141 command_runner.go:130] > 51391683
	I0815 23:58:38.203712   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0815 23:58:38.213249   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0815 23:58:38.224383   49141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0815 23:58:38.228963   49141 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:58:38.229171   49141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0815 23:58:38.229225   49141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0815 23:58:38.234929   49141 command_runner.go:130] > 3ec20f2e
	I0815 23:58:38.235126   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 23:58:38.244980   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 23:58:38.256316   49141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:38.260799   49141 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:38.261004   49141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:38.261063   49141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 23:58:38.266988   49141 command_runner.go:130] > b5213941
	I0815 23:58:38.267048   49141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
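The hash-then-symlink pairs above follow the usual OpenSSL c_rehash convention: the subject hash printed by "openssl x509 -hash" becomes the name of a <hash>.0 link in /etc/ssl/certs, which is where 51391683.0, 3ec20f2e.0 and b5213941.0 come from. A minimal sketch of the same idea for a single, hypothetical certificate path:

	# Sketch only: expose a CA certificate under /etc/ssl/certs by its subject
	# hash, mirroring the three test-profile certs linked above. The input path
	# is a placeholder, not one taken from this report.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
	sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${hash}.0"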
	I0815 23:58:38.276620   49141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:58:38.281371   49141 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 23:58:38.281396   49141 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0815 23:58:38.281404   49141 command_runner.go:130] > Device: 253,1	Inode: 6291478     Links: 1
	I0815 23:58:38.281415   49141 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 23:58:38.281439   49141 command_runner.go:130] > Access: 2024-08-15 23:51:58.469727512 +0000
	I0815 23:58:38.281449   49141 command_runner.go:130] > Modify: 2024-08-15 23:51:58.469727512 +0000
	I0815 23:58:38.281460   49141 command_runner.go:130] > Change: 2024-08-15 23:51:58.469727512 +0000
	I0815 23:58:38.281471   49141 command_runner.go:130] >  Birth: 2024-08-15 23:51:58.469727512 +0000
	I0815 23:58:38.281553   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 23:58:38.287324   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.287526   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 23:58:38.293324   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.293387   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 23:58:38.299799   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.299871   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 23:58:38.305358   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.305536   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 23:58:38.311149   49141 command_runner.go:130] > Certificate will not expire
	I0815 23:58:38.311354   49141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 23:58:38.316709   49141 command_runner.go:130] > Certificate will not expire
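Each of the checks above uses openssl's -checkend 86400 flag, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; that exit status is what minikube reports as "Certificate will not expire". A hedged sketch of the same check looped over some of the certificates verified here (the loop is illustrative, not minikube's implementation):

	# Sketch only: re-run the 24-hour expiry check over several of the
	# control-plane client certificates that the log verifies one by one above.
	for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${crt}.crt" \
	    && echo "${crt}: will not expire within 24h"
	done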
	I0815 23:58:38.316929   49141 kubeadm.go:392] StartCluster: {Name:multinode-145108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-145108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.241 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:58:38.317033   49141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 23:58:38.317098   49141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 23:58:38.360824   49141 command_runner.go:130] > a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20
	I0815 23:58:38.360851   49141 command_runner.go:130] > 7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e
	I0815 23:58:38.360860   49141 command_runner.go:130] > e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36
	I0815 23:58:38.360870   49141 command_runner.go:130] > 801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8
	I0815 23:58:38.360877   49141 command_runner.go:130] > a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a
	I0815 23:58:38.360886   49141 command_runner.go:130] > e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1
	I0815 23:58:38.360894   49141 command_runner.go:130] > e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781
	I0815 23:58:38.360911   49141 command_runner.go:130] > 3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9
	I0815 23:58:38.360954   49141 cri.go:89] found id: "a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20"
	I0815 23:58:38.360964   49141 cri.go:89] found id: "7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e"
	I0815 23:58:38.360968   49141 cri.go:89] found id: "e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36"
	I0815 23:58:38.360972   49141 cri.go:89] found id: "801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8"
	I0815 23:58:38.360975   49141 cri.go:89] found id: "a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a"
	I0815 23:58:38.360978   49141 cri.go:89] found id: "e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1"
	I0815 23:58:38.360981   49141 cri.go:89] found id: "e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781"
	I0815 23:58:38.360984   49141 cri.go:89] found id: "3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9"
	I0815 23:58:38.360986   49141 cri.go:89] found id: ""
	I0815 23:58:38.361026   49141 ssh_runner.go:195] Run: sudo runc list -f json
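StartCluster first asks the CRI runtime for existing kube-system container IDs (the eight IDs listed above) and then inspects the low-level runtime state with runc. A sketch of the same two queries as they appear in this log, run directly on the node:

	# Sketch only: reproduce the two enumeration commands from the log above.
	# crictl filters by the pod-namespace label; runc shows OCI-level state.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json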
	
	
	==> CRI-O <==
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.027019686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766567026992380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ddec1fc-ae2a-412d-b5fa-e0ef34b34511 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.027830288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=796c788a-615e-4fa8-b4b6-d6f8edbefa7b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.027979653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=796c788a-615e-4fa8-b4b6-d6f8edbefa7b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.028359578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f6dc7afc9283b35c7b80bfdb092e4ae3fe3d7e042fe4ed6c90e16ace9a20de,PodSandboxId:252868a22af5102c4c9fb9fb03664a9404ad51d5ee58cb9bb2986b542b59771d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723766359435754485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d,PodSandboxId:8494ba630f92f5b6b1bb3ca0ec201bc5c1492c1b91d14a632ca739a29091b03b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723766325914164330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e,PodSandboxId:63cf2848b334f2e49bdc9caaa3949d598a94486e14c08345c2c32943a2319c42,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723766325964131703,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5,PodSandboxId:3b1f67d8681e23ed687115ec0575b1ed9112ea9047ba962c1622d4b2b7c6b52c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723766325823196155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05caa33dcdec812c8640535c6d52db6be57c9df197dc03574f9d85c016cdbc53,PodSandboxId:a5caefa2102df20d2a03301f5a6dc4c4448ce0349ecc2a697a48bcb10806c3c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723766325779392896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880,PodSandboxId:2da5135fbede854225eac41edf259a51a90576d81511f69c9c9514652ef550dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723766320944894907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0,PodSandboxId:85f52cd98ab9c23733520c50b2008764ad3419bd70c3e4ee64be69e10028c7ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723766320928146354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a,PodSandboxId:47870a76c14a5190be9a138cff93f9a137d4fac3e291134c186a3c4272278819,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723766320866828838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565,PodSandboxId:7ff998226c39d956f5e5b9b27602ce04114d99dec2cc9cb3a93cd50dea784d34,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723766320812977597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a9b72cbbbd778125d5a22fbf4e7a0a190ca5277ee444fe2c9cdf8e2f232a2a,PodSandboxId:2f77bdc0a065f29f67f0c6b2f30783f2cb081d56a22c2064447954fe82ba24c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723765999938228931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20,PodSandboxId:cf2239096f991e94cecf74ca246360b59214637277272b57aaf1f720a14a5146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723765946451466934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e,PodSandboxId:9b40c16717dfc0f0801fea14f49cd52360c3aaff620982c76d1d508c9cbc4188,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723765946441200334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36,PodSandboxId:da85041659fd531e3c115fbc4f527f4169a4b6d64ba3b765dd21c679a13270a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723765934667376993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8,PodSandboxId:3d83d3da8eb472e33d62c09bfb3e1fc250e0be253b0845ff59dd418ae7e6301b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723765932331616310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1,PodSandboxId:9c338c9803f3349a087c2e9b6b1be71e0478f9321e47f24c2f64e5c859d58c22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723765922068555261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a,PodSandboxId:62496da5bd532f6b8ee12509ad1330af1be0f7e9d0b9849df57d9005cd292f47,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723765922096588202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781,PodSandboxId:44def40a9dae141695784db8e3794eb7838a530ac4ff28952d84a9315b5a87a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723765922003966650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9,PodSandboxId:2613471cfdea6cd86260f1301204b10795418199bbaac3e5f8b32b513b11c903,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723765921984188216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=796c788a-615e-4fa8-b4b6-d6f8edbefa7b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.071732125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82ddf236-a8ad-4fff-9e2a-418f86f199b4 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.071809326Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82ddf236-a8ad-4fff-9e2a-418f86f199b4 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.073133888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c3ee3ca-ac33-4b80-a9a2-b2cd48f1feed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.073547567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766567073524269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c3ee3ca-ac33-4b80-a9a2-b2cd48f1feed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.074103178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1963908-0f51-47b4-9b76-123e2f0460e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.074155689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1963908-0f51-47b4-9b76-123e2f0460e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.074472279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f6dc7afc9283b35c7b80bfdb092e4ae3fe3d7e042fe4ed6c90e16ace9a20de,PodSandboxId:252868a22af5102c4c9fb9fb03664a9404ad51d5ee58cb9bb2986b542b59771d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723766359435754485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d,PodSandboxId:8494ba630f92f5b6b1bb3ca0ec201bc5c1492c1b91d14a632ca739a29091b03b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723766325914164330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e,PodSandboxId:63cf2848b334f2e49bdc9caaa3949d598a94486e14c08345c2c32943a2319c42,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723766325964131703,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5,PodSandboxId:3b1f67d8681e23ed687115ec0575b1ed9112ea9047ba962c1622d4b2b7c6b52c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723766325823196155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05caa33dcdec812c8640535c6d52db6be57c9df197dc03574f9d85c016cdbc53,PodSandboxId:a5caefa2102df20d2a03301f5a6dc4c4448ce0349ecc2a697a48bcb10806c3c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723766325779392896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880,PodSandboxId:2da5135fbede854225eac41edf259a51a90576d81511f69c9c9514652ef550dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723766320944894907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0,PodSandboxId:85f52cd98ab9c23733520c50b2008764ad3419bd70c3e4ee64be69e10028c7ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723766320928146354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a,PodSandboxId:47870a76c14a5190be9a138cff93f9a137d4fac3e291134c186a3c4272278819,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723766320866828838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565,PodSandboxId:7ff998226c39d956f5e5b9b27602ce04114d99dec2cc9cb3a93cd50dea784d34,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723766320812977597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a9b72cbbbd778125d5a22fbf4e7a0a190ca5277ee444fe2c9cdf8e2f232a2a,PodSandboxId:2f77bdc0a065f29f67f0c6b2f30783f2cb081d56a22c2064447954fe82ba24c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723765999938228931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20,PodSandboxId:cf2239096f991e94cecf74ca246360b59214637277272b57aaf1f720a14a5146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723765946451466934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e,PodSandboxId:9b40c16717dfc0f0801fea14f49cd52360c3aaff620982c76d1d508c9cbc4188,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723765946441200334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36,PodSandboxId:da85041659fd531e3c115fbc4f527f4169a4b6d64ba3b765dd21c679a13270a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723765934667376993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8,PodSandboxId:3d83d3da8eb472e33d62c09bfb3e1fc250e0be253b0845ff59dd418ae7e6301b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723765932331616310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1,PodSandboxId:9c338c9803f3349a087c2e9b6b1be71e0478f9321e47f24c2f64e5c859d58c22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723765922068555261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a,PodSandboxId:62496da5bd532f6b8ee12509ad1330af1be0f7e9d0b9849df57d9005cd292f47,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723765922096588202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781,PodSandboxId:44def40a9dae141695784db8e3794eb7838a530ac4ff28952d84a9315b5a87a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723765922003966650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9,PodSandboxId:2613471cfdea6cd86260f1301204b10795418199bbaac3e5f8b32b513b11c903,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723765921984188216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1963908-0f51-47b4-9b76-123e2f0460e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.120576823Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aaaa00ce-112d-4af4-a746-b9d03d07000a name=/runtime.v1.RuntimeService/Version
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.120718288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aaaa00ce-112d-4af4-a746-b9d03d07000a name=/runtime.v1.RuntimeService/Version
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.121893624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af6174be-12bd-4f3d-85d0-78ae3a99f03b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.122316187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766567122293414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af6174be-12bd-4f3d-85d0-78ae3a99f03b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.123141939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f05bcb5-b096-4fe8-8621-3552320bafcf name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.123203064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f05bcb5-b096-4fe8-8621-3552320bafcf name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.123556318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f6dc7afc9283b35c7b80bfdb092e4ae3fe3d7e042fe4ed6c90e16ace9a20de,PodSandboxId:252868a22af5102c4c9fb9fb03664a9404ad51d5ee58cb9bb2986b542b59771d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723766359435754485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d,PodSandboxId:8494ba630f92f5b6b1bb3ca0ec201bc5c1492c1b91d14a632ca739a29091b03b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723766325914164330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e,PodSandboxId:63cf2848b334f2e49bdc9caaa3949d598a94486e14c08345c2c32943a2319c42,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723766325964131703,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5,PodSandboxId:3b1f67d8681e23ed687115ec0575b1ed9112ea9047ba962c1622d4b2b7c6b52c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723766325823196155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05caa33dcdec812c8640535c6d52db6be57c9df197dc03574f9d85c016cdbc53,PodSandboxId:a5caefa2102df20d2a03301f5a6dc4c4448ce0349ecc2a697a48bcb10806c3c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723766325779392896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880,PodSandboxId:2da5135fbede854225eac41edf259a51a90576d81511f69c9c9514652ef550dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723766320944894907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0,PodSandboxId:85f52cd98ab9c23733520c50b2008764ad3419bd70c3e4ee64be69e10028c7ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723766320928146354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a,PodSandboxId:47870a76c14a5190be9a138cff93f9a137d4fac3e291134c186a3c4272278819,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723766320866828838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565,PodSandboxId:7ff998226c39d956f5e5b9b27602ce04114d99dec2cc9cb3a93cd50dea784d34,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723766320812977597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a9b72cbbbd778125d5a22fbf4e7a0a190ca5277ee444fe2c9cdf8e2f232a2a,PodSandboxId:2f77bdc0a065f29f67f0c6b2f30783f2cb081d56a22c2064447954fe82ba24c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723765999938228931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20,PodSandboxId:cf2239096f991e94cecf74ca246360b59214637277272b57aaf1f720a14a5146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723765946451466934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e,PodSandboxId:9b40c16717dfc0f0801fea14f49cd52360c3aaff620982c76d1d508c9cbc4188,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723765946441200334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36,PodSandboxId:da85041659fd531e3c115fbc4f527f4169a4b6d64ba3b765dd21c679a13270a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723765934667376993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8,PodSandboxId:3d83d3da8eb472e33d62c09bfb3e1fc250e0be253b0845ff59dd418ae7e6301b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723765932331616310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1,PodSandboxId:9c338c9803f3349a087c2e9b6b1be71e0478f9321e47f24c2f64e5c859d58c22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723765922068555261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a,PodSandboxId:62496da5bd532f6b8ee12509ad1330af1be0f7e9d0b9849df57d9005cd292f47,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723765922096588202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781,PodSandboxId:44def40a9dae141695784db8e3794eb7838a530ac4ff28952d84a9315b5a87a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723765922003966650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9,PodSandboxId:2613471cfdea6cd86260f1301204b10795418199bbaac3e5f8b32b513b11c903,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723765921984188216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f05bcb5-b096-4fe8-8621-3552320bafcf name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.169589675Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dce5cb32-9bce-44ef-a231-09d2bdf69e1e name=/runtime.v1.RuntimeService/Version
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.169726808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dce5cb32-9bce-44ef-a231-09d2bdf69e1e name=/runtime.v1.RuntimeService/Version
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.170734285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f6357cd-86e7-4016-a8ec-d453a40e97d7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.171183010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766567171160955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f6357cd-86e7-4016-a8ec-d453a40e97d7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.171754295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a1bc9b8-9448-4043-8bec-f89ac2a90503 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.171813341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a1bc9b8-9448-4043-8bec-f89ac2a90503 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:02:47 multinode-145108 crio[2749]: time="2024-08-16 00:02:47.172158470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f6dc7afc9283b35c7b80bfdb092e4ae3fe3d7e042fe4ed6c90e16ace9a20de,PodSandboxId:252868a22af5102c4c9fb9fb03664a9404ad51d5ee58cb9bb2986b542b59771d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723766359435754485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d,PodSandboxId:8494ba630f92f5b6b1bb3ca0ec201bc5c1492c1b91d14a632ca739a29091b03b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723766325914164330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e,PodSandboxId:63cf2848b334f2e49bdc9caaa3949d598a94486e14c08345c2c32943a2319c42,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723766325964131703,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5,PodSandboxId:3b1f67d8681e23ed687115ec0575b1ed9112ea9047ba962c1622d4b2b7c6b52c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723766325823196155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05caa33dcdec812c8640535c6d52db6be57c9df197dc03574f9d85c016cdbc53,PodSandboxId:a5caefa2102df20d2a03301f5a6dc4c4448ce0349ecc2a697a48bcb10806c3c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723766325779392896,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880,PodSandboxId:2da5135fbede854225eac41edf259a51a90576d81511f69c9c9514652ef550dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723766320944894907,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0,PodSandboxId:85f52cd98ab9c23733520c50b2008764ad3419bd70c3e4ee64be69e10028c7ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723766320928146354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a,PodSandboxId:47870a76c14a5190be9a138cff93f9a137d4fac3e291134c186a3c4272278819,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723766320866828838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565,PodSandboxId:7ff998226c39d956f5e5b9b27602ce04114d99dec2cc9cb3a93cd50dea784d34,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723766320812977597,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a9b72cbbbd778125d5a22fbf4e7a0a190ca5277ee444fe2c9cdf8e2f232a2a,PodSandboxId:2f77bdc0a065f29f67f0c6b2f30783f2cb081d56a22c2064447954fe82ba24c7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723765999938228931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-h45mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de33a362-6df1-4a49-9c9f-bfbdb3c8183c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20,PodSandboxId:cf2239096f991e94cecf74ca246360b59214637277272b57aaf1f720a14a5146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723765946451466934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4hjxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2521d34-15fc-4304-a3ae-7d9e95df6342,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7252e4597aa95dd546d712385e479729d04ef9611aa7c96a35632aa0fac5a13e,PodSandboxId:9b40c16717dfc0f0801fea14f49cd52360c3aaff620982c76d1d508c9cbc4188,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723765946441200334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9cef8aec-1cd5-4251-aa88-a6dc5b398c12,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36,PodSandboxId:da85041659fd531e3c115fbc4f527f4169a4b6d64ba3b765dd21c679a13270a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723765934667376993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-s5nls,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4cf7ba89-dc92-4ead-a84b-56dca892ab9f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8,PodSandboxId:3d83d3da8eb472e33d62c09bfb3e1fc250e0be253b0845ff59dd418ae7e6301b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723765932331616310,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kcx86,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: ae10003b-b485-4db4-8649-bee882b1bbd0,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1,PodSandboxId:9c338c9803f3349a087c2e9b6b1be71e0478f9321e47f24c2f64e5c859d58c22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723765922068555261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6
abef8cf2f7b219d41ad3fd197a8d9b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a,PodSandboxId:62496da5bd532f6b8ee12509ad1330af1be0f7e9d0b9849df57d9005cd292f47,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723765922096588202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c6a42104da1631cd79aee1b5360fe02,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781,PodSandboxId:44def40a9dae141695784db8e3794eb7838a530ac4ff28952d84a9315b5a87a2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723765922003966650,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8bb1e0b7b05f4430922a4242347e8ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9,PodSandboxId:2613471cfdea6cd86260f1301204b10795418199bbaac3e5f8b32b513b11c903,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723765921984188216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-145108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f5d3acc91f539d7d95f3f990c1c4bf,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a1bc9b8-9448-4043-8bec-f89ac2a90503 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1f6dc7afc928       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   252868a22af51       busybox-7dff88458-h45mw
	8eafd3504e17c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   63cf2848b334f       coredns-6f6b679f8f-4hjxz
	3b90353c92464       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   8494ba630f92f       kindnet-s5nls
	251584fcc4165       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   3b1f67d8681e2       kube-proxy-kcx86
	05caa33dcdec8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   a5caefa2102df       storage-provisioner
	0cd0110364c13       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   2da5135fbede8       kube-controller-manager-multinode-145108
	dcbef7b5ec951       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   85f52cd98ab9c       kube-apiserver-multinode-145108
	d472b1ccc9ebf       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   47870a76c14a5       kube-scheduler-multinode-145108
	95be1d8c42460       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   7ff998226c39d       etcd-multinode-145108
	57a9b72cbbbd7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   2f77bdc0a065f       busybox-7dff88458-h45mw
	a1f497b941980       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   cf2239096f991       coredns-6f6b679f8f-4hjxz
	7252e4597aa95       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   9b40c16717dfc       storage-provisioner
	e278f8b98e2fd       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   da85041659fd5       kindnet-s5nls
	801914e3b1224       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   3d83d3da8eb47       kube-proxy-kcx86
	a340571a03bb5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   62496da5bd532       etcd-multinode-145108
	e6d7a41786e3d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   9c338c9803f33       kube-scheduler-multinode-145108
	e6b50f7c9ea0b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   44def40a9dae1       kube-controller-manager-multinode-145108
	3297def808e61       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   2613471cfdea6       kube-apiserver-multinode-145108
	
	
	==> coredns [8eafd3504e17c9cc04df0d6439564745edc280bdaab5f998bf56ff8ac29ad63e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42333 - 44399 "HINFO IN 2718109776628081537.769365088325569420. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013436286s
	
	
	==> coredns [a1f497b9419806ed7149518f397fc82ff9f3b06a712c64a8629e8337b085fc20] <==
	[INFO] 10.244.0.3:47841 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001745321s
	[INFO] 10.244.0.3:46215 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086963s
	[INFO] 10.244.0.3:46516 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062235s
	[INFO] 10.244.0.3:38734 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001012065s
	[INFO] 10.244.0.3:46251 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043544s
	[INFO] 10.244.0.3:45677 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000038037s
	[INFO] 10.244.0.3:38317 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034451s
	[INFO] 10.244.1.2:40203 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148563s
	[INFO] 10.244.1.2:51238 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084161s
	[INFO] 10.244.1.2:51595 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	[INFO] 10.244.1.2:41913 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106578s
	[INFO] 10.244.0.3:59324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000070871s
	[INFO] 10.244.0.3:43911 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000041772s
	[INFO] 10.244.0.3:50977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036786s
	[INFO] 10.244.0.3:42748 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048599s
	[INFO] 10.244.1.2:60117 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000214611s
	[INFO] 10.244.1.2:41113 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116508s
	[INFO] 10.244.1.2:50876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.001595719s
	[INFO] 10.244.1.2:44338 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159898s
	[INFO] 10.244.0.3:53942 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094778s
	[INFO] 10.244.0.3:60634 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109391s
	[INFO] 10.244.0.3:58182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077413s
	[INFO] 10.244.0.3:39166 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126142s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-145108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-145108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=multinode-145108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T23_52_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:52:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-145108
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:02:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 23:58:44 +0000   Thu, 15 Aug 2024 23:52:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 23:58:44 +0000   Thu, 15 Aug 2024 23:52:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 23:58:44 +0000   Thu, 15 Aug 2024 23:52:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 23:58:44 +0000   Thu, 15 Aug 2024 23:52:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    multinode-145108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5243210c7e140159abc9e09b0caa559
	  System UUID:                a5243210-c7e1-4015-9abc-9e09b0caa559
	  Boot ID:                    3739afea-d7f7-47db-94c7-f132f026a571
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h45mw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 coredns-6f6b679f8f-4hjxz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-145108                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-s5nls                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-145108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-145108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-kcx86                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-145108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-145108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-145108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-145108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-145108 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-145108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-145108 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-145108 event: Registered Node multinode-145108 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-145108 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-145108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-145108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-145108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node multinode-145108 event: Registered Node multinode-145108 in Controller
	
	
	Name:               multinode-145108-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-145108-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=multinode-145108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T23_59_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 23:59:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-145108-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:00:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 23:59:55 +0000   Fri, 16 Aug 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 23:59:55 +0000   Fri, 16 Aug 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 23:59:55 +0000   Fri, 16 Aug 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 23:59:55 +0000   Fri, 16 Aug 2024 00:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    multinode-145108-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3056b26dac145c1a78441e7444d5ce4
	  System UUID:                d3056b26-dac1-45c1-a784-41e7444d5ce4
	  Boot ID:                    62af9a1c-448b-4ea1-a152-dbe385f49419
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tj29q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-5zpnl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m51s
	  kube-system                 kube-proxy-5t9th           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m18s                  kube-proxy       
	  Normal  Starting                 9m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m51s (x2 over 9m51s)  kubelet          Node multinode-145108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m51s (x2 over 9m51s)  kubelet          Node multinode-145108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m51s (x2 over 9m51s)  kubelet          Node multinode-145108-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m32s                  kubelet          Node multinode-145108-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-145108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-145108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-145108-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-145108-m02 status is now: NodeReady
	  Normal  NodeNotReady             100s                   node-controller  Node multinode-145108-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.058934] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.166644] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.145408] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.271277] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.056455] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.130716] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[Aug15 23:52] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.007148] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +0.081550] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.220077] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[  +0.025659] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.139634] kauditd_printk_skb: 60 callbacks suppressed
	[Aug15 23:53] kauditd_printk_skb: 12 callbacks suppressed
	[Aug15 23:58] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.148213] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.194202] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.157191] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.321309] systemd-fstab-generator[2734]: Ignoring "noauto" option for root device
	[  +2.343540] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +2.164899] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.082563] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.631555] kauditd_printk_skb: 52 callbacks suppressed
	[ +14.354059] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +0.094025] kauditd_printk_skb: 34 callbacks suppressed
	[Aug15 23:59] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [95be1d8c424606d7ec3d77e04259dd7e3c8c7b9917bd505eed6bc226755b4565] <==
	{"level":"info","ts":"2024-08-15T23:58:41.241791Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","added-peer-id":"d85ef093c7464643","added-peer-peer-urls":["https://192.168.39.117:2380"]}
	{"level":"info","ts":"2024-08-15T23:58:41.241905Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:58:41.243566Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T23:58:41.243828Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:58:41.243897Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:58:41.247952Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d85ef093c7464643","initial-advertise-peer-urls":["https://192.168.39.117:2380"],"listen-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.117:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T23:58:41.248086Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T23:58:41.248298Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-08-15T23:58:41.248308Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-08-15T23:58:42.992786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T23:58:42.992862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T23:58:42.992909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 received MsgPreVoteResp from d85ef093c7464643 at term 2"}
	{"level":"info","ts":"2024-08-15T23:58:42.992928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T23:58:42.992934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 received MsgVoteResp from d85ef093c7464643 at term 3"}
	{"level":"info","ts":"2024-08-15T23:58:42.992943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d85ef093c7464643 became leader at term 3"}
	{"level":"info","ts":"2024-08-15T23:58:42.992951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d85ef093c7464643 elected leader d85ef093c7464643 at term 3"}
	{"level":"info","ts":"2024-08-15T23:58:42.995911Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d85ef093c7464643","local-member-attributes":"{Name:multinode-145108 ClientURLs:[https://192.168.39.117:2379]}","request-path":"/0/members/d85ef093c7464643/attributes","cluster-id":"44831ab0f42e7be7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T23:58:42.995937Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:58:42.997135Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:58:42.997318Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T23:58:42.997364Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T23:58:42.997196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:58:42.998552Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:58:42.999360Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T23:58:43.000332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.117:2379"}
	
	
	==> etcd [a340571a03bb55fe87e2dc0f893e3f41352347e918abd9b4639f610ff1665f9a] <==
	{"level":"info","ts":"2024-08-15T23:52:03.217967Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d85ef093c7464643","local-member-attributes":"{Name:multinode-145108 ClientURLs:[https://192.168.39.117:2379]}","request-path":"/0/members/d85ef093c7464643/attributes","cluster-id":"44831ab0f42e7be7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T23:52:03.218139Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:52:03.218542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T23:52:03.218635Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"44831ab0f42e7be7","local-member-id":"d85ef093c7464643","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:52:03.222076Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:52:03.222120Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T23:52:03.218716Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T23:52:03.222162Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T23:52:03.219281Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:52:03.225195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.117:2379"}
	{"level":"info","ts":"2024-08-15T23:52:03.226497Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T23:52:03.227221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-15T23:53:02.962159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.266673ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T23:53:02.962341Z","caller":"traceutil/trace.go:171","msg":"trace[394562956] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:517; }","duration":"101.535089ms","start":"2024-08-15T23:53:02.860783Z","end":"2024-08-15T23:53:02.962318Z","steps":["trace[394562956] 'range keys from in-memory index tree'  (duration: 101.24518ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T23:53:52.548229Z","caller":"traceutil/trace.go:171","msg":"trace[1536968948] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"217.210835ms","start":"2024-08-15T23:53:52.330982Z","end":"2024-08-15T23:53:52.548193Z","steps":["trace[1536968948] 'process raft request'  (duration: 121.827777ms)","trace[1536968948] 'compare'  (duration: 95.170242ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T23:57:03.319291Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T23:57:03.319414Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-145108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	{"level":"warn","ts":"2024-08-15T23:57:03.320149Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:57:03.320290Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:57:03.401521Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T23:57:03.401793Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T23:57:03.401939Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d85ef093c7464643","current-leader-member-id":"d85ef093c7464643"}
	{"level":"info","ts":"2024-08-15T23:57:03.404510Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-08-15T23:57:03.404726Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2024-08-15T23:57:03.404763Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-145108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	
	
	==> kernel <==
	 00:02:47 up 11 min,  0 users,  load average: 0.17, 0.24, 0.14
	Linux multinode-145108 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3b90353c9246420260ca23b897da287892a9ef639a83b26b3ddd59b0a739052d] <==
	I0816 00:01:46.935617       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0816 00:01:56.944007       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0816 00:01:56.944073       1 main.go:299] handling current node
	I0816 00:01:56.944093       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0816 00:01:56.944100       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0816 00:02:06.943504       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0816 00:02:06.943795       1 main.go:299] handling current node
	I0816 00:02:06.943848       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0816 00:02:06.943868       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0816 00:02:16.944722       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0816 00:02:16.944844       1 main.go:299] handling current node
	I0816 00:02:16.944865       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0816 00:02:16.944871       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0816 00:02:26.935625       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0816 00:02:26.935835       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0816 00:02:26.936121       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0816 00:02:26.936152       1 main.go:299] handling current node
	I0816 00:02:36.939339       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0816 00:02:36.939455       1 main.go:299] handling current node
	I0816 00:02:36.939474       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0816 00:02:36.939480       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0816 00:02:46.935720       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0816 00:02:46.935777       1 main.go:299] handling current node
	I0816 00:02:46.935820       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0816 00:02:46.935832       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e278f8b98e2fd4cb1e392cc12b3798b8292631e0f001fa396f15d8354a586c36] <==
	I0815 23:56:15.726230       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:25.731924       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:56:25.732063       1 main.go:299] handling current node
	I0815 23:56:25.732125       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:56:25.732148       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:56:25.732307       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:56:25.732331       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:35.725757       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:56:35.725836       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:56:35.726068       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:56:35.726105       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:35.726271       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:56:35.726305       1 main.go:299] handling current node
	I0815 23:56:45.725899       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:56:45.726009       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:56:45.726185       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:56:45.726208       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:45.726273       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:56:45.726292       1 main.go:299] handling current node
	I0815 23:56:55.731095       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0815 23:56:55.731143       1 main.go:322] Node multinode-145108-m02 has CIDR [10.244.1.0/24] 
	I0815 23:56:55.731281       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0815 23:56:55.731305       1 main.go:322] Node multinode-145108-m03 has CIDR [10.244.3.0/24] 
	I0815 23:56:55.731356       1 main.go:295] Handling node with IPs: map[192.168.39.117:{}]
	I0815 23:56:55.731377       1 main.go:299] handling current node
	
	
	==> kube-apiserver [3297def808e614852b298eea2013d9effd0b54ff2b39194436d383988494d5d9] <==
	I0815 23:57:03.345471       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 23:57:03.346230       1 controller.go:157] Shutting down quota evaluator
	I0815 23:57:03.346284       1 controller.go:176] quota evaluator worker shutdown
	I0815 23:57:03.347371       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0815 23:57:03.347514       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0815 23:57:03.348009       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0815 23:57:03.348748       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 23:57:03.349033       1 controller.go:176] quota evaluator worker shutdown
	I0815 23:57:03.349070       1 controller.go:176] quota evaluator worker shutdown
	I0815 23:57:03.349094       1 controller.go:176] quota evaluator worker shutdown
	I0815 23:57:03.349117       1 controller.go:176] quota evaluator worker shutdown
	E0815 23:57:03.350309       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0815 23:57:03.350557       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0815 23:57:03.352549       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.352930       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353023       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353085       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353145       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353208       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353267       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353420       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353732       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353826       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353906       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 23:57:03.353966       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dcbef7b5ec9519728e3ab610e10de1212cd010faa25d91162ad150cda74c50b0] <==
	I0815 23:58:44.394185       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 23:58:44.394410       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 23:58:44.394468       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 23:58:44.395597       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 23:58:44.398596       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 23:58:44.401854       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 23:58:44.403061       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 23:58:44.415875       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0815 23:58:44.419368       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0815 23:58:44.424619       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 23:58:44.424739       1 policy_source.go:224] refreshing policies
	I0815 23:58:44.461232       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 23:58:44.461325       1 aggregator.go:171] initial CRD sync complete...
	I0815 23:58:44.461358       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 23:58:44.461838       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 23:58:44.461978       1 cache.go:39] Caches are synced for autoregister controller
	I0815 23:58:44.510208       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 23:58:45.310297       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 23:58:46.792363       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 23:58:46.942988       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 23:58:46.958722       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 23:58:47.046139       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 23:58:47.053419       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 23:58:47.816628       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 23:58:48.009409       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0cd0110364c13db31eceb7f2b1034c506ea90e5af85dd65ccdc1eda38106c880] <==
	E0816 00:00:02.569339       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-145108-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-145108-m03"
	E0816 00:00:02.569386       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-145108-m03': failed to patch node CIDR: Node \"multinode-145108-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0816 00:00:02.569406       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:02.575342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:02.703931       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:02.871961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:03.063092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:12.605013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:20.448429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:20.449155       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0816 00:00:20.460784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:22.840738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:25.215299       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:25.232436       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:25.815997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0816 00:00:25.816077       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0816 00:01:07.767039       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-kdng6"
	I0816 00:01:07.805623       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-kdng6"
	I0816 00:01:07.805839       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2tpvm"
	I0816 00:01:07.845591       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2tpvm"
	I0816 00:01:07.861117       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	I0816 00:01:07.883830       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	I0816 00:01:07.909933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.9144ms"
	I0816 00:01:07.918010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="271.274µs"
	I0816 00:01:13.007199       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	
	
	==> kube-controller-manager [e6b50f7c9ea0bd978fdda5c5348171dee3d3bff211cb2b4a5ce4e53d78513781] <==
	I0815 23:54:38.946599       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:38.947289       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0815 23:54:39.979862       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-145108-m03\" does not exist"
	I0815 23:54:39.981807       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0815 23:54:39.994286       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-145108-m03" podCIDRs=["10.244.3.0/24"]
	I0815 23:54:39.994313       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:39.994432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:40.003001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:40.013259       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:40.355210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:41.397863       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:50.026160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:57.804294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:54:57.804410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m02"
	I0815 23:54:57.819463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:01.339422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:41.358168       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-145108-m03"
	I0815 23:55:41.358216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	I0815 23:55:41.362153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:41.378289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	I0815 23:55:41.386574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:41.393354       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.808077ms"
	I0815 23:55:41.393599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.287µs"
	I0815 23:55:46.431745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m03"
	I0815 23:55:56.508029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-145108-m02"
	
	
	==> kube-proxy [251584fcc4165029a4177f31e69618f3e227bae489b944885f37b92d34276ed5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:58:46.119797       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:58:46.138055       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.117"]
	E0815 23:58:46.138569       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:58:46.220389       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:58:46.220444       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:58:46.220475       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:58:46.231550       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:58:46.232620       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:58:46.232833       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:58:46.234085       1 config.go:197] "Starting service config controller"
	I0815 23:58:46.234145       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:58:46.234189       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:58:46.234205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:58:46.234758       1 config.go:326] "Starting node config controller"
	I0815 23:58:46.234826       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:58:46.335474       1 shared_informer.go:320] Caches are synced for node config
	I0815 23:58:46.335519       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:58:46.335530       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [801914e3b1224f52858bd96229607cb44869f8d71358dfd026666c1a92ffc8a8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 23:52:13.077594       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 23:52:13.088165       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.117"]
	E0815 23:52:13.088248       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 23:52:13.137782       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 23:52:13.137842       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 23:52:13.137873       1 server_linux.go:169] "Using iptables Proxier"
	I0815 23:52:13.140363       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 23:52:13.140594       1 server.go:483] "Version info" version="v1.31.0"
	I0815 23:52:13.140622       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:52:13.145277       1 config.go:197] "Starting service config controller"
	I0815 23:52:13.145331       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 23:52:13.145363       1 config.go:104] "Starting endpoint slice config controller"
	I0815 23:52:13.145367       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 23:52:13.149566       1 config.go:326] "Starting node config controller"
	I0815 23:52:13.149629       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 23:52:13.249265       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 23:52:13.249346       1 shared_informer.go:320] Caches are synced for service config
	I0815 23:52:13.267065       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d472b1ccc9ebf03ba327aa2c03e458d31259e2ce8d8ef4de7da517999f94a07a] <==
	I0815 23:58:42.243434       1 serving.go:386] Generated self-signed cert in-memory
	I0815 23:58:44.432527       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 23:58:44.432571       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 23:58:44.437148       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 23:58:44.437284       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0815 23:58:44.437336       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0815 23:58:44.437403       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 23:58:44.442358       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 23:58:44.442374       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 23:58:44.442390       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0815 23:58:44.442395       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0815 23:58:44.538093       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0815 23:58:44.543035       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0815 23:58:44.543037       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e6d7a41786e3d0a39440fa3138423dabaf8cde8c725d878ff0a9a34cc8d89bc1] <==
	W0815 23:52:04.669729       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 23:52:04.672929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:04.669807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 23:52:04.672953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0815 23:52:04.669933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.516564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 23:52:05.516619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.659525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 23:52:05.659581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.689555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 23:52:05.689703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.797320       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 23:52:05.797464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.835979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 23:52:05.836058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.856912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 23:52:05.857037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.865744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 23:52:05.865856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:05.905328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 23:52:05.905462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 23:52:06.135124       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 23:52:06.135514       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 23:52:07.831260       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 23:57:03.315616       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 16 00:01:30 multinode-145108 kubelet[2959]: E0816 00:01:30.272516    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766490271595032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:01:40 multinode-145108 kubelet[2959]: E0816 00:01:40.241508    2959 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 00:01:40 multinode-145108 kubelet[2959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 00:01:40 multinode-145108 kubelet[2959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 00:01:40 multinode-145108 kubelet[2959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 00:01:40 multinode-145108 kubelet[2959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 00:01:40 multinode-145108 kubelet[2959]: E0816 00:01:40.274532    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766500274171713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:01:40 multinode-145108 kubelet[2959]: E0816 00:01:40.274577    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766500274171713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:01:50 multinode-145108 kubelet[2959]: E0816 00:01:50.276793    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766510276046398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:01:50 multinode-145108 kubelet[2959]: E0816 00:01:50.277382    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766510276046398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:00 multinode-145108 kubelet[2959]: E0816 00:02:00.280877    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766520279256368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:00 multinode-145108 kubelet[2959]: E0816 00:02:00.280999    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766520279256368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:10 multinode-145108 kubelet[2959]: E0816 00:02:10.286302    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766530285851720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:10 multinode-145108 kubelet[2959]: E0816 00:02:10.286329    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766530285851720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:20 multinode-145108 kubelet[2959]: E0816 00:02:20.288514    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766540287580932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:20 multinode-145108 kubelet[2959]: E0816 00:02:20.289014    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766540287580932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:30 multinode-145108 kubelet[2959]: E0816 00:02:30.291178    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766550290796352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:30 multinode-145108 kubelet[2959]: E0816 00:02:30.291514    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766550290796352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:40 multinode-145108 kubelet[2959]: E0816 00:02:40.241956    2959 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 00:02:40 multinode-145108 kubelet[2959]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 00:02:40 multinode-145108 kubelet[2959]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 00:02:40 multinode-145108 kubelet[2959]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 00:02:40 multinode-145108 kubelet[2959]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 00:02:40 multinode-145108 kubelet[2959]: E0816 00:02:40.297087    2959 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766560294705860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:02:40 multinode-145108 kubelet[2959]: E0816 00:02:40.297399    2959 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723766560294705860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:02:46.737403   51113 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19452-12919/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-145108 -n multinode-145108
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-145108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.44s)
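Note on the stderr above: the "failed to output last start logs ... bufio.Scanner: token too long" message is Go's bufio.ErrTooLong, returned when a single line in lastStart.txt is longer than bufio.Scanner's default 64 KiB token limit. Below is a minimal sketch, assuming a hypothetical readLongLines helper rather than minikube's actual logs.go code, of reading such a file by enlarging the scanner buffer:

	// Minimal sketch (hypothetical helper, not minikube's actual implementation):
	// read a log file whose individual lines can exceed bufio.Scanner's default
	// 64 KiB limit, which is what produces "bufio.Scanner: token too long" above.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func readLongLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line cap from the 64 KiB default to 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		lines, err := readLongLines("lastStart.txt") // path is illustrative
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("read %d lines\n", len(lines))
	}

An alternative when line lengths are unbounded is bufio.Reader.ReadString('\n'), which grows its buffer as needed instead of enforcing a fixed token cap.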

                                                
                                    
x
+
TestPreload (275.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-835027 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0816 00:07:34.233003   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:07:51.160198   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-835027 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.056326511s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-835027 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-835027 image pull gcr.io/k8s-minikube/busybox: (1.10575546s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-835027
E0816 00:09:53.800786   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-835027: exit status 82 (2m0.458285376s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-835027"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-835027 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-16 00:11:06.50856589 +0000 UTC m=+3941.089144865
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-835027 -n test-preload-835027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-835027 -n test-preload-835027: exit status 3 (18.529196398s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:11:25.034190   54074 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.165:22: connect: no route to host
	E0816 00:11:25.034210   54074 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.165:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-835027" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-835027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-835027
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-835027: (1.132741042s)
--- FAIL: TestPreload (275.28s)

                                                
                                    
x
+
TestKubernetesUpgrade (376.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0816 00:14:36.867414   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:14:53.799328   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m18.521250447s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-165951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-165951" primary control-plane node in "kubernetes-upgrade-165951" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 00:14:11.661008   58019 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:14:11.661110   58019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:14:11.661118   58019 out.go:358] Setting ErrFile to fd 2...
	I0816 00:14:11.661122   58019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:14:11.661300   58019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:14:11.661791   58019 out.go:352] Setting JSON to false
	I0816 00:14:11.662765   58019 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6952,"bootTime":1723760300,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:14:11.662822   58019 start.go:139] virtualization: kvm guest
	I0816 00:14:11.664956   58019 out.go:177] * [kubernetes-upgrade-165951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:14:11.666220   58019 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:14:11.666219   58019 notify.go:220] Checking for updates...
	I0816 00:14:11.668845   58019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:14:11.670557   58019 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:14:11.672033   58019 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:14:11.673417   58019 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:14:11.674648   58019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:14:11.676584   58019 config.go:182] Loaded profile config "NoKubernetes-153553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:14:11.676737   58019 config.go:182] Loaded profile config "offline-crio-116258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:14:11.676851   58019 config.go:182] Loaded profile config "running-upgrade-986094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0816 00:14:11.676956   58019 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:14:11.713149   58019 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 00:14:11.714629   58019 start.go:297] selected driver: kvm2
	I0816 00:14:11.714650   58019 start.go:901] validating driver "kvm2" against <nil>
	I0816 00:14:11.714661   58019 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:14:11.715428   58019 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:14:11.715503   58019 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:14:11.731312   58019 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:14:11.731352   58019 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 00:14:11.731569   58019 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 00:14:11.731633   58019 cni.go:84] Creating CNI manager for ""
	I0816 00:14:11.731648   58019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:14:11.731656   58019 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 00:14:11.731720   58019 start.go:340] cluster config:
	{Name:kubernetes-upgrade-165951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-165951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:14:11.731836   58019 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:14:11.733614   58019 out.go:177] * Starting "kubernetes-upgrade-165951" primary control-plane node in "kubernetes-upgrade-165951" cluster
	I0816 00:14:11.734805   58019 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:14:11.734833   58019 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:14:11.734845   58019 cache.go:56] Caching tarball of preloaded images
	I0816 00:14:11.734910   58019 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:14:11.734923   58019 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 00:14:11.735017   58019 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/config.json ...
	I0816 00:14:11.735040   58019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/config.json: {Name:mk70f018d19b1f8f0e7a7a6fa17505803d8ce545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:14:11.735194   58019 start.go:360] acquireMachinesLock for kubernetes-upgrade-165951: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:15:00.410542   58019 start.go:364] duration metric: took 48.675318537s to acquireMachinesLock for "kubernetes-upgrade-165951"
	I0816 00:15:00.410616   58019 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-165951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-165951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:15:00.410745   58019 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 00:15:00.412740   58019 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 00:15:00.412940   58019 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:15:00.412989   58019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:15:00.433348   58019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44559
	I0816 00:15:00.433787   58019 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:15:00.434371   58019 main.go:141] libmachine: Using API Version  1
	I0816 00:15:00.434395   58019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:15:00.434762   58019 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:15:00.434980   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetMachineName
	I0816 00:15:00.435128   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:15:00.435340   58019 start.go:159] libmachine.API.Create for "kubernetes-upgrade-165951" (driver="kvm2")
	I0816 00:15:00.435376   58019 client.go:168] LocalClient.Create starting
	I0816 00:15:00.435421   58019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0816 00:15:00.435472   58019 main.go:141] libmachine: Decoding PEM data...
	I0816 00:15:00.435491   58019 main.go:141] libmachine: Parsing certificate...
	I0816 00:15:00.435562   58019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0816 00:15:00.435587   58019 main.go:141] libmachine: Decoding PEM data...
	I0816 00:15:00.435604   58019 main.go:141] libmachine: Parsing certificate...
	I0816 00:15:00.435624   58019 main.go:141] libmachine: Running pre-create checks...
	I0816 00:15:00.435641   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .PreCreateCheck
	I0816 00:15:00.436077   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetConfigRaw
	I0816 00:15:00.436482   58019 main.go:141] libmachine: Creating machine...
	I0816 00:15:00.436496   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .Create
	I0816 00:15:00.436627   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Creating KVM machine...
	I0816 00:15:00.437975   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found existing default KVM network
	I0816 00:15:00.439280   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:00.439124   58727 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c8:85:f5} reservation:<nil>}
	I0816 00:15:00.440013   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:00.439914   58727 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:15:6b:1d} reservation:<nil>}
	I0816 00:15:00.440867   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:00.440777   58727 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:1b:a8:4c} reservation:<nil>}
	I0816 00:15:00.442342   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:00.442218   58727 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a3950}
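The "skipping subnet ... that is taken" lines above, followed by "using free private subnet", show the driver walking candidate /24 ranges and taking the first one not already claimed by a host bridge. Purely as an illustrative sketch (the function name and candidate list below are hypothetical, not minikube's own code), the same check can be written against the host's interface addresses:

```go
package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet returns the first candidate /24 that does not overlap an
// address already assigned to a local interface (e.g. an existing virbr bridge).
// Hypothetical helper for illustration only.
func freePrivateSubnet(candidates []string) (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	for _, c := range candidates {
		_, cidr, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		taken := false
		for _, a := range addrs {
			ip, _, err := net.ParseCIDR(a.String())
			if err != nil {
				continue
			}
			if cidr.Contains(ip) {
				taken = true
				break
			}
		}
		if !taken {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	subnet, err := freePrivateSubnet([]string{
		"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet)
}
```

On a host whose virbr bridges already occupy the first three ranges, this would print 192.168.72.0/24, matching the subnet chosen in the log.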
	I0816 00:15:00.442371   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | created network xml: 
	I0816 00:15:00.442384   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | <network>
	I0816 00:15:00.442404   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG |   <name>mk-kubernetes-upgrade-165951</name>
	I0816 00:15:00.442414   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG |   <dns enable='no'/>
	I0816 00:15:00.442421   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG |   
	I0816 00:15:00.442431   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0816 00:15:00.442439   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG |     <dhcp>
	I0816 00:15:00.442449   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0816 00:15:00.442472   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG |     </dhcp>
	I0816 00:15:00.442481   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG |   </ip>
	I0816 00:15:00.442487   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG |   
	I0816 00:15:00.442499   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | </network>
	I0816 00:15:00.442509   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | 
	I0816 00:15:00.448293   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | trying to create private KVM network mk-kubernetes-upgrade-165951 192.168.72.0/24...
	I0816 00:15:00.528602   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | private KVM network mk-kubernetes-upgrade-165951 192.168.72.0/24 created
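The network XML logged above is standard libvirt network definition. As a hedged, standalone sketch (assuming virsh is installed; minikube's kvm2 driver talks to libvirt through the docker-machine-driver-kvm2 plugin rather than the CLI), the equivalent define-and-start step looks like:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Illustrative only: write the network XML from the log to a temp file,
// then register and activate it with virsh.
func main() {
	networkXML := `<network>
  <name>mk-kubernetes-upgrade-165951</name>
  <dns enable='no'/>
  <ip address='192.168.72.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.72.2' end='192.168.72.253'/>
    </dhcp>
  </ip>
</network>`

	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// net-define registers the network persistently; net-start activates it.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-kubernetes-upgrade-165951"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v:\n%s", args, out)
		if err != nil {
			panic(err)
		}
	}
}
```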
	I0816 00:15:00.528664   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951 ...
	I0816 00:15:00.528680   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 00:15:00.528690   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:00.528561   58727 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:15:00.528758   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 00:15:00.798719   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:00.798590   58727 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa...
	I0816 00:15:00.947818   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:00.947692   58727 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/kubernetes-upgrade-165951.rawdisk...
	I0816 00:15:00.947851   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Writing magic tar header
	I0816 00:15:00.947869   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Writing SSH key tar header
	I0816 00:15:00.948066   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:00.947977   58727 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951 ...
	I0816 00:15:00.948224   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951
	I0816 00:15:00.948253   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951 (perms=drwx------)
	I0816 00:15:00.948266   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0816 00:15:00.948283   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0816 00:15:00.948307   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0816 00:15:00.948321   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0816 00:15:00.948331   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:15:00.948345   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 00:15:00.948362   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 00:15:00.948374   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Creating domain...
	I0816 00:15:00.948389   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0816 00:15:00.948402   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 00:15:00.948414   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Checking permissions on dir: /home/jenkins
	I0816 00:15:00.948434   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Checking permissions on dir: /home
	I0816 00:15:00.948449   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Skipping /home - not owner
	I0816 00:15:00.949586   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) define libvirt domain using xml: 
	I0816 00:15:00.949614   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) <domain type='kvm'>
	I0816 00:15:00.949623   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   <name>kubernetes-upgrade-165951</name>
	I0816 00:15:00.949630   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   <memory unit='MiB'>2200</memory>
	I0816 00:15:00.949640   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   <vcpu>2</vcpu>
	I0816 00:15:00.949650   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   <features>
	I0816 00:15:00.949669   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <acpi/>
	I0816 00:15:00.949679   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <apic/>
	I0816 00:15:00.949688   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <pae/>
	I0816 00:15:00.949700   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     
	I0816 00:15:00.949709   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   </features>
	I0816 00:15:00.949720   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   <cpu mode='host-passthrough'>
	I0816 00:15:00.949731   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   
	I0816 00:15:00.949743   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   </cpu>
	I0816 00:15:00.949753   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   <os>
	I0816 00:15:00.949768   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <type>hvm</type>
	I0816 00:15:00.949780   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <boot dev='cdrom'/>
	I0816 00:15:00.949788   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <boot dev='hd'/>
	I0816 00:15:00.949803   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <bootmenu enable='no'/>
	I0816 00:15:00.949812   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   </os>
	I0816 00:15:00.949818   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   <devices>
	I0816 00:15:00.949827   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <disk type='file' device='cdrom'>
	I0816 00:15:00.949878   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/boot2docker.iso'/>
	I0816 00:15:00.949902   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <target dev='hdc' bus='scsi'/>
	I0816 00:15:00.949914   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <readonly/>
	I0816 00:15:00.949924   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     </disk>
	I0816 00:15:00.949933   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <disk type='file' device='disk'>
	I0816 00:15:00.949946   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 00:15:00.949962   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/kubernetes-upgrade-165951.rawdisk'/>
	I0816 00:15:00.949989   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <target dev='hda' bus='virtio'/>
	I0816 00:15:00.950005   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     </disk>
	I0816 00:15:00.950021   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <interface type='network'>
	I0816 00:15:00.950039   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <source network='mk-kubernetes-upgrade-165951'/>
	I0816 00:15:00.950052   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <model type='virtio'/>
	I0816 00:15:00.950063   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     </interface>
	I0816 00:15:00.950072   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <interface type='network'>
	I0816 00:15:00.950081   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <source network='default'/>
	I0816 00:15:00.950087   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <model type='virtio'/>
	I0816 00:15:00.950098   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     </interface>
	I0816 00:15:00.950110   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <serial type='pty'>
	I0816 00:15:00.950123   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <target port='0'/>
	I0816 00:15:00.950136   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     </serial>
	I0816 00:15:00.950147   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <console type='pty'>
	I0816 00:15:00.950159   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <target type='serial' port='0'/>
	I0816 00:15:00.950180   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     </console>
	I0816 00:15:00.950209   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     <rng model='virtio'>
	I0816 00:15:00.950247   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)       <backend model='random'>/dev/random</backend>
	I0816 00:15:00.950263   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     </rng>
	I0816 00:15:00.950274   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     
	I0816 00:15:00.950283   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)     
	I0816 00:15:00.950293   58019 main.go:141] libmachine: (kubernetes-upgrade-165951)   </devices>
	I0816 00:15:00.950302   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) </domain>
	I0816 00:15:00.950315   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) 
	I0816 00:15:00.954346   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:63:61:2d in network default
	I0816 00:15:00.955063   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Ensuring networks are active...
	I0816 00:15:00.955089   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:00.955774   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Ensuring network default is active
	I0816 00:15:00.956228   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Ensuring network mk-kubernetes-upgrade-165951 is active
	I0816 00:15:00.956763   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Getting domain xml...
	I0816 00:15:00.957628   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Creating domain...
	I0816 00:15:02.225515   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Waiting to get IP...
	I0816 00:15:02.226244   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:02.226703   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:02.226733   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:02.226676   58727 retry.go:31] will retry after 226.129551ms: waiting for machine to come up
	I0816 00:15:02.453964   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:02.454456   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:02.454482   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:02.454405   58727 retry.go:31] will retry after 371.165248ms: waiting for machine to come up
	I0816 00:15:02.826908   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:02.827369   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:02.827397   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:02.827327   58727 retry.go:31] will retry after 447.416026ms: waiting for machine to come up
	I0816 00:15:03.275918   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:03.276395   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:03.276417   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:03.276351   58727 retry.go:31] will retry after 466.34029ms: waiting for machine to come up
	I0816 00:15:03.743718   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:03.744258   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:03.744282   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:03.744215   58727 retry.go:31] will retry after 542.983732ms: waiting for machine to come up
	I0816 00:15:04.288832   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:04.289277   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:04.289309   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:04.289213   58727 retry.go:31] will retry after 888.142288ms: waiting for machine to come up
	I0816 00:15:05.179049   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:05.179492   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:05.179516   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:05.179448   58727 retry.go:31] will retry after 1.117815839s: waiting for machine to come up
	I0816 00:15:06.299153   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:06.299711   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:06.299735   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:06.299657   58727 retry.go:31] will retry after 1.345000857s: waiting for machine to come up
	I0816 00:15:07.646543   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:07.647093   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:07.647124   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:07.647036   58727 retry.go:31] will retry after 1.73034802s: waiting for machine to come up
	I0816 00:15:09.379066   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:09.379529   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:09.379543   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:09.379501   58727 retry.go:31] will retry after 1.992832571s: waiting for machine to come up
	I0816 00:15:11.374481   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:11.374973   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:11.374996   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:11.374923   58727 retry.go:31] will retry after 2.762828942s: waiting for machine to come up
	I0816 00:15:14.139421   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:14.139975   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:14.139999   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:14.139931   58727 retry.go:31] will retry after 2.30573391s: waiting for machine to come up
	I0816 00:15:16.446885   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:16.447291   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:16.447325   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:16.447263   58727 retry.go:31] will retry after 3.043336834s: waiting for machine to come up
	I0816 00:15:19.494606   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:19.495045   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find current IP address of domain kubernetes-upgrade-165951 in network mk-kubernetes-upgrade-165951
	I0816 00:15:19.495068   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | I0816 00:15:19.495003   58727 retry.go:31] will retry after 3.528142345s: waiting for machine to come up
	I0816 00:15:23.025772   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.026357   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Found IP for machine: 192.168.72.157
	I0816 00:15:23.026379   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Reserving static IP address...
	I0816 00:15:23.026408   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has current primary IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.026836   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-165951", mac: "52:54:00:7e:65:e8", ip: "192.168.72.157"} in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.104053   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Getting to WaitForSSH function...
	I0816 00:15:23.104087   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Reserved static IP address: 192.168.72.157
	I0816 00:15:23.104139   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Waiting for SSH to be available...
	I0816 00:15:23.106521   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.106809   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:23.106831   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.106936   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Using SSH client type: external
	I0816 00:15:23.106949   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa (-rw-------)
	I0816 00:15:23.106979   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:15:23.106992   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | About to run SSH command:
	I0816 00:15:23.107001   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | exit 0
	I0816 00:15:23.238008   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | SSH cmd err, output: <nil>: 
	I0816 00:15:23.238278   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) KVM machine creation complete!
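The long run of "will retry after ...: waiting for machine to come up" lines above is a poll-with-backoff on the new domain's DHCP lease. A rough standalone sketch of the same pattern, assuming virsh is available (minikube's own retry.go helper uses a different delay schedule and error handling), is:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls the libvirt network's DHCP leases until the domain's MAC
// address appears with an IP, sleeping between attempts. Illustrative only.
func waitForIP(network, mac string, attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).CombinedOutput()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, mac) {
					// lease rows: expiry-date expiry-time mac proto ip/prefix hostname client-id
					fields := strings.Fields(line)
					if len(fields) >= 5 {
						return strings.Split(fields[4], "/")[0], nil
					}
				}
			}
		}
		time.Sleep(delay)
		delay *= 2 // crude backoff, similar in spirit to the retry delays in the log
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
}

func main() {
	ip, err := waitForIP("mk-kubernetes-upgrade-165951", "52:54:00:7e:65:e8", 15)
	if err != nil {
		panic(err)
	}
	fmt.Println("machine IP:", ip)
}
```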
	I0816 00:15:23.238612   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetConfigRaw
	I0816 00:15:23.239190   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:15:23.239390   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:15:23.239556   58019 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 00:15:23.239572   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetState
	I0816 00:15:23.240832   58019 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 00:15:23.240847   58019 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 00:15:23.240855   58019 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 00:15:23.240863   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:23.243380   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.243710   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:23.243736   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.243898   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:23.244051   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:23.244224   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:23.244346   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:23.244509   58019 main.go:141] libmachine: Using SSH client type: native
	I0816 00:15:23.244744   58019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:15:23.244757   58019 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 00:15:23.357209   58019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:15:23.357232   58019 main.go:141] libmachine: Detecting the provisioner...
	I0816 00:15:23.357243   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:23.360200   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.360570   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:23.360595   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.360821   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:23.361022   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:23.361207   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:23.361340   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:23.361551   58019 main.go:141] libmachine: Using SSH client type: native
	I0816 00:15:23.361754   58019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:15:23.361766   58019 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 00:15:23.478676   58019 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 00:15:23.478822   58019 main.go:141] libmachine: found compatible host: buildroot
	I0816 00:15:23.478838   58019 main.go:141] libmachine: Provisioning with buildroot...
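Provisioner detection above is simply `cat /etc/os-release` executed over SSH with the generated key. A minimal sketch of that probe using golang.org/x/crypto/ssh (an external module; libmachine's "native" SSH client is its own implementation, and the address and key path below are taken from the log for illustration):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH connects with a private key and runs one command, returning its
// combined output. Illustrative only.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.72.157:22", "docker",
		"/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa",
		"cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```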
	I0816 00:15:23.478848   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetMachineName
	I0816 00:15:23.479088   58019 buildroot.go:166] provisioning hostname "kubernetes-upgrade-165951"
	I0816 00:15:23.479114   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetMachineName
	I0816 00:15:23.479307   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:23.481859   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.482281   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:23.482310   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.482462   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:23.482623   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:23.482776   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:23.482909   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:23.483060   58019 main.go:141] libmachine: Using SSH client type: native
	I0816 00:15:23.483246   58019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:15:23.483263   58019 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-165951 && echo "kubernetes-upgrade-165951" | sudo tee /etc/hostname
	I0816 00:15:23.613666   58019 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-165951
	
	I0816 00:15:23.613691   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:23.616467   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.616766   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:23.616794   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.616988   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:23.617174   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:23.617353   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:23.617483   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:23.617648   58019 main.go:141] libmachine: Using SSH client type: native
	I0816 00:15:23.617887   58019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:15:23.617912   58019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-165951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-165951/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-165951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:15:23.744218   58019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:15:23.744252   58019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:15:23.744304   58019 buildroot.go:174] setting up certificates
	I0816 00:15:23.744329   58019 provision.go:84] configureAuth start
	I0816 00:15:23.744350   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetMachineName
	I0816 00:15:23.744650   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetIP
	I0816 00:15:23.747642   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.748049   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:23.748093   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.748278   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:23.750913   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.751338   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:23.751379   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.751514   58019 provision.go:143] copyHostCerts
	I0816 00:15:23.751559   58019 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:15:23.751576   58019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:15:23.751624   58019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:15:23.751785   58019 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:15:23.751798   58019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:15:23.752623   58019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:15:23.752751   58019 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:15:23.752760   58019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:15:23.752781   58019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:15:23.752850   58019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-165951 san=[127.0.0.1 192.168.72.157 kubernetes-upgrade-165951 localhost minikube]
	I0816 00:15:23.834352   58019 provision.go:177] copyRemoteCerts
	I0816 00:15:23.834403   58019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:15:23.834426   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:23.837053   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.837396   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:23.837420   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:23.837648   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:23.837863   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:23.838060   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:23.838209   58019 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa Username:docker}
	I0816 00:15:23.931748   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:15:23.962452   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0816 00:15:23.993702   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:15:24.020386   58019 provision.go:87] duration metric: took 276.039581ms to configureAuth
	I0816 00:15:24.020418   58019 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:15:24.020630   58019 config.go:182] Loaded profile config "kubernetes-upgrade-165951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:15:24.020719   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:24.023947   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.024348   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:24.024381   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.024610   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:24.024843   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:24.025049   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:24.025193   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:24.025348   58019 main.go:141] libmachine: Using SSH client type: native
	I0816 00:15:24.025568   58019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:15:24.025591   58019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:15:24.328127   58019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:15:24.328171   58019 main.go:141] libmachine: Checking connection to Docker...
	I0816 00:15:24.328183   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetURL
	I0816 00:15:24.329606   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | Using libvirt version 6000000
	I0816 00:15:24.331846   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.332230   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:24.332263   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.332413   58019 main.go:141] libmachine: Docker is up and running!
	I0816 00:15:24.332431   58019 main.go:141] libmachine: Reticulating splines...
	I0816 00:15:24.332438   58019 client.go:171] duration metric: took 23.897054817s to LocalClient.Create
	I0816 00:15:24.332466   58019 start.go:167] duration metric: took 23.897128648s to libmachine.API.Create "kubernetes-upgrade-165951"
	I0816 00:15:24.332478   58019 start.go:293] postStartSetup for "kubernetes-upgrade-165951" (driver="kvm2")
	I0816 00:15:24.332491   58019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:15:24.332513   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:15:24.332770   58019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:15:24.332802   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:24.335580   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.335988   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:24.336023   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.336178   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:24.336379   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:24.336546   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:24.336689   58019 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa Username:docker}
	I0816 00:15:24.425658   58019 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:15:24.431361   58019 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:15:24.431392   58019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:15:24.431475   58019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:15:24.431572   58019 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:15:24.431719   58019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:15:24.445454   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:15:24.472177   58019 start.go:296] duration metric: took 139.683978ms for postStartSetup
	I0816 00:15:24.472236   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetConfigRaw
	I0816 00:15:24.472823   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetIP
	I0816 00:15:24.475375   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.475723   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:24.475755   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.475998   58019 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/config.json ...
	I0816 00:15:24.476206   58019 start.go:128] duration metric: took 24.065449545s to createHost
	I0816 00:15:24.476228   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:24.478462   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.478883   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:24.478912   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.479092   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:24.479284   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:24.479456   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:24.479641   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:24.479809   58019 main.go:141] libmachine: Using SSH client type: native
	I0816 00:15:24.479978   58019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:15:24.479988   58019 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:15:24.594768   58019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723767324.552153582
	
	I0816 00:15:24.594799   58019 fix.go:216] guest clock: 1723767324.552153582
	I0816 00:15:24.594811   58019 fix.go:229] Guest: 2024-08-16 00:15:24.552153582 +0000 UTC Remote: 2024-08-16 00:15:24.476216804 +0000 UTC m=+72.849965853 (delta=75.936778ms)
	I0816 00:15:24.594842   58019 fix.go:200] guest clock delta is within tolerance: 75.936778ms
	I0816 00:15:24.594850   58019 start.go:83] releasing machines lock for "kubernetes-upgrade-165951", held for 24.184267537s
	I0816 00:15:24.594886   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:15:24.595179   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetIP
	I0816 00:15:24.599050   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.599620   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:24.599678   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.599838   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:15:24.600510   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:15:24.600691   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:15:24.600797   58019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:15:24.600836   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:24.600878   58019 ssh_runner.go:195] Run: cat /version.json
	I0816 00:15:24.600902   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:15:24.603621   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.603769   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.604052   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:24.604081   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.604190   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:24.604232   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:24.604255   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:24.604369   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:24.604508   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:15:24.604520   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:24.604683   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:15:24.604680   58019 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa Username:docker}
	I0816 00:15:24.604850   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:15:24.604994   58019 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa Username:docker}
	I0816 00:15:24.712063   58019 ssh_runner.go:195] Run: systemctl --version
	I0816 00:15:24.718285   58019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:15:24.878247   58019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:15:24.884270   58019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:15:24.884351   58019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:15:24.902883   58019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:15:24.902910   58019 start.go:495] detecting cgroup driver to use...
	I0816 00:15:24.902979   58019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:15:24.921204   58019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:15:24.938011   58019 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:15:24.938089   58019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:15:24.958438   58019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:15:24.979274   58019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:15:25.121286   58019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:15:25.271967   58019 docker.go:233] disabling docker service ...
	I0816 00:15:25.272027   58019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:15:25.288077   58019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:15:25.303843   58019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:15:25.448558   58019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:15:25.575276   58019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:15:25.592127   58019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:15:25.613274   58019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 00:15:25.613348   58019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:15:25.624652   58019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:15:25.624718   58019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:15:25.635643   58019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:15:25.651590   58019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
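	The three sed invocations above point CRI-O at the registry.k8s.io/pause:3.2 pause image and switch it to the cgroupfs cgroup manager with conmon in the pod cgroup. A minimal Go sketch of the equivalent edit, assuming /etc/crio/crio.conf.d/02-crio.conf exists locally and the program runs as root (this is not minikube's own code):

	package main

	import (
		"os"
		"regexp"
	)

	// Sketch only: apply the same drop-in rewrites as the sed commands in the
	// log: set pause_image, set cgroup_manager, and place conmon_cgroup = "pod"
	// right after it. Path and root privileges are assumptions.
	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"

		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf := string(data)

		// pause_image = "registry.k8s.io/pause:3.2"
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
		// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			panic(err)
		}
	}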
	I0816 00:15:25.662026   58019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:15:25.672823   58019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:15:25.685264   58019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:15:25.685337   58019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:15:25.703110   58019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
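	The two commands above load br_netfilter and enable IPv4 forwarding so bridged pod traffic can be routed. A minimal local sketch in Go, assuming root privileges:

	package main

	import (
		"os"
		"os/exec"
	)

	// Sketch of the netfilter prep shown in the log: modprobe br_netfilter and
	// write 1 to /proc/sys/net/ipv4/ip_forward. Must run as root.
	func main() {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			panic(err)
		}
	}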
	I0816 00:15:25.715860   58019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:15:25.826464   58019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:15:25.973718   58019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:15:25.973809   58019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:15:25.979001   58019 start.go:563] Will wait 60s for crictl version
	I0816 00:15:25.979069   58019 ssh_runner.go:195] Run: which crictl
	I0816 00:15:25.982991   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:15:26.041144   58019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:15:26.041243   58019 ssh_runner.go:195] Run: crio --version
	I0816 00:15:26.073331   58019 ssh_runner.go:195] Run: crio --version
	I0816 00:15:26.110714   58019 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 00:15:26.111971   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetIP
	I0816 00:15:26.114743   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:26.115135   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:15:16 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:15:26.115166   58019 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:15:26.115362   58019 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:15:26.119876   58019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
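	The bash pipeline above drops any stale host.minikube.internal entry from /etc/hosts and appends one pointing at the libvirt gateway. A rough Go equivalent, assuming root and the 192.168.72.1 gateway address shown in the log:

	package main

	import (
		"os"
		"strings"
	)

	// Sketch only: rewrite /etc/hosts so it carries exactly one
	// host.minikube.internal entry, mirroring the grep/echo pipeline in the log.
	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}

		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		// avoid stacking blank lines before appending the fresh entry
		if n := len(kept); n > 0 && kept[n-1] == "" {
			kept = kept[:n-1]
		}
		kept = append(kept, "192.168.72.1\thost.minikube.internal", "")

		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
			panic(err)
		}
	}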
	I0816 00:15:26.134552   58019 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-165951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-165951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:15:26.134728   58019 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:15:26.134794   58019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:15:26.183554   58019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:15:26.183641   58019 ssh_runner.go:195] Run: which lz4
	I0816 00:15:26.188456   58019 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:15:26.193017   58019 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:15:26.193070   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 00:15:27.943881   58019 crio.go:462] duration metric: took 1.755458183s to copy over tarball
	I0816 00:15:27.943966   58019 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:15:30.535896   58019 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.591893281s)
	I0816 00:15:30.535944   58019 crio.go:469] duration metric: took 2.592010864s to extract the tarball
	I0816 00:15:30.535952   58019 ssh_runner.go:146] rm: /preloaded.tar.lz4
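	The preload step above copies the image tarball to the guest and unpacks it under /var with tar and an lz4 filter. A minimal Go sketch of that extraction, assuming the tarball is already at /preloaded.tar.lz4, lz4 is on PATH, and the program runs as root:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch only: run the same tar command the log shows for unpacking a
	// minikube preload tarball into /var.
	func main() {
		cmd := exec.Command("tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4",
			"-C", "/var",
			"-xf", "/preloaded.tar.lz4",
		)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			panic(err)
		}
	}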
	I0816 00:15:30.578014   58019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:15:30.637861   58019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:15:30.637893   58019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:15:30.637980   58019 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:15:30.637990   58019 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:15:30.638042   58019 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:15:30.638042   58019 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:15:30.638016   58019 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:15:30.638078   58019 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 00:15:30.638086   58019 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:15:30.638159   58019 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 00:15:30.639637   58019 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:15:30.639648   58019 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 00:15:30.639659   58019 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:15:30.639638   58019 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:15:30.639674   58019 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:15:30.639699   58019 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:15:30.639699   58019 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 00:15:30.639775   58019 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:15:30.801120   58019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 00:15:30.802562   58019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 00:15:30.819315   58019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:15:30.822574   58019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:15:30.828061   58019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:15:30.838407   58019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:15:30.863337   58019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 00:15:30.878110   58019 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 00:15:30.878181   58019 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 00:15:30.878245   58019 ssh_runner.go:195] Run: which crictl
	I0816 00:15:30.944345   58019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:15:30.960326   58019 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 00:15:30.960370   58019 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:15:30.960419   58019 ssh_runner.go:195] Run: which crictl
	I0816 00:15:30.960455   58019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 00:15:30.960494   58019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:15:30.960533   58019 ssh_runner.go:195] Run: which crictl
	I0816 00:15:31.037982   58019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 00:15:31.038000   58019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 00:15:31.038024   58019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 00:15:31.038030   58019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:15:31.038033   58019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:15:31.038048   58019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:15:31.038077   58019 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 00:15:31.038122   58019 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 00:15:31.038140   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:15:31.038082   58019 ssh_runner.go:195] Run: which crictl
	I0816 00:15:31.038202   58019 ssh_runner.go:195] Run: which crictl
	I0816 00:15:31.038082   58019 ssh_runner.go:195] Run: which crictl
	I0816 00:15:31.038082   58019 ssh_runner.go:195] Run: which crictl
	I0816 00:15:31.173641   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:15:31.173679   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:15:31.173695   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:15:31.173736   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:15:31.173784   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:15:31.173825   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:15:31.173789   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:15:31.344154   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:15:31.344199   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:15:31.344310   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:15:31.344381   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:15:31.344414   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:15:31.344682   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:15:31.398250   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:15:31.551337   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:15:31.551392   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:15:31.551348   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:15:31.551481   58019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 00:15:31.573170   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:15:31.573213   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:15:31.617646   58019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:15:31.697516   58019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 00:15:31.697612   58019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 00:15:31.698740   58019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 00:15:31.730113   58019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 00:15:31.730225   58019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 00:15:31.737008   58019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 00:15:31.737063   58019 cache_images.go:92] duration metric: took 1.09915384s to LoadCachedImages
	W0816 00:15:31.737126   58019 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0816 00:15:31.737141   58019 kubeadm.go:934] updating node { 192.168.72.157 8443 v1.20.0 crio true true} ...
	I0816 00:15:31.737268   58019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-165951 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-165951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:15:31.737356   58019 ssh_runner.go:195] Run: crio config
	I0816 00:15:31.790730   58019 cni.go:84] Creating CNI manager for ""
	I0816 00:15:31.790752   58019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:15:31.790763   58019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:15:31.790790   58019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.157 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-165951 NodeName:kubernetes-upgrade-165951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 00:15:31.790951   58019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-165951"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.157"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:15:31.791037   58019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 00:15:31.804844   58019 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:15:31.804919   58019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:15:31.818673   58019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0816 00:15:31.840446   58019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:15:31.858947   58019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0816 00:15:31.876718   58019 ssh_runner.go:195] Run: grep 192.168.72.157	control-plane.minikube.internal$ /etc/hosts
	I0816 00:15:31.880814   58019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:15:31.893684   58019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:15:32.067432   58019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:15:32.089454   58019 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951 for IP: 192.168.72.157
	I0816 00:15:32.089477   58019 certs.go:194] generating shared ca certs ...
	I0816 00:15:32.089493   58019 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:15:32.089688   58019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:15:32.089747   58019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:15:32.089765   58019 certs.go:256] generating profile certs ...
	I0816 00:15:32.089835   58019 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/client.key
	I0816 00:15:32.089875   58019 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/client.crt with IP's: []
	I0816 00:15:32.236383   58019 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/client.crt ...
	I0816 00:15:32.236422   58019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/client.crt: {Name:mk4a2c8ea81eae99d0293da1424b086c9e4723dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:15:32.236634   58019 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/client.key ...
	I0816 00:15:32.236661   58019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/client.key: {Name:mk3c97c266cc1de850595e1c9014f02c68b5e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:15:32.236817   58019 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.key.869b6558
	I0816 00:15:32.236842   58019 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.crt.869b6558 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.157]
	I0816 00:15:32.565733   58019 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.crt.869b6558 ...
	I0816 00:15:32.565758   58019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.crt.869b6558: {Name:mkba0547b0fb568ffc028abf0ecff6d91e304af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:15:32.565910   58019 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.key.869b6558 ...
	I0816 00:15:32.565924   58019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.key.869b6558: {Name:mk8ad0601c911875606fd528147db48c1a97ed01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:15:32.566002   58019 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.crt.869b6558 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.crt
	I0816 00:15:32.566092   58019 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.key.869b6558 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.key
	I0816 00:15:32.566149   58019 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.key
	I0816 00:15:32.566164   58019 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.crt with IP's: []
	I0816 00:15:32.974634   58019 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.crt ...
	I0816 00:15:32.974662   58019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.crt: {Name:mk19eb4c3ac7dba96f7d8a85e5af39d28b7be39c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:15:32.974844   58019 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.key ...
	I0816 00:15:32.974863   58019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.key: {Name:mkfd040a5a378745d7a0dc8b89fef3053ec4844a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
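	The certs steps above sign the profile certificates (client, apiserver, aggregator) against the shared minikubeCA, with the apiserver cert covering the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.157 shown in the log. A self-contained Go sketch of that kind of signing with crypto/x509, using a throwaway CA and illustrative file names (this is not minikube's certs.go):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// Sketch only: create a throwaway CA, then sign an apiserver-style serving
	// cert for the IP SANs listed in the log.
	func main() {
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
		caCert := must(x509.ParseCertificate(caDER))

		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.157"),
			},
		}
		srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))

		writePEM("apiserver.crt", "CERTIFICATE", srvDER)
		writePEM("apiserver.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(srvKey))
	}

	func writePEM(path, typ string, der []byte) {
		f := must(os.Create(path))
		defer f.Close()
		if err := pem.Encode(f, &pem.Block{Type: typ, Bytes: der}); err != nil {
			panic(err)
		}
	}

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}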
	I0816 00:15:32.975082   58019 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:15:32.975126   58019 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:15:32.975141   58019 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:15:32.975176   58019 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:15:32.975207   58019 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:15:32.975239   58019 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:15:32.975292   58019 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:15:32.975846   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:15:33.015132   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:15:33.059151   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:15:33.100405   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:15:33.138132   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 00:15:33.170950   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:15:33.206298   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:15:33.233158   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:15:33.261920   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:15:33.291263   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:15:33.316645   58019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:15:33.347633   58019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:15:33.376739   58019 ssh_runner.go:195] Run: openssl version
	I0816 00:15:33.385387   58019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:15:33.398505   58019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:15:33.403470   58019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:15:33.403531   58019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:15:33.410788   58019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:15:33.422049   58019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:15:33.434992   58019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:15:33.440514   58019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:15:33.440573   58019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:15:33.448473   58019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:15:33.461005   58019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:15:33.475880   58019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:15:33.482146   58019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:15:33.482195   58019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:15:33.490060   58019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
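	The openssl and ln commands above install each CA certificate into the system trust store under its OpenSSL subject hash (for example /etc/ssl/certs/b5213941.0 for minikubeCA.pem). A small Go sketch of the same idea, assuming openssl is on PATH and the program runs as root:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// Sketch only: compute the OpenSSL subject hash of a CA certificate and
	// symlink it into /etc/ssl/certs/<hash>.0, like the commands in the log.
	func main() {
		const cert = "/usr/share/ca-certificates/minikubeCA.pem"

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}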
	I0816 00:15:33.504759   58019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:15:33.510472   58019 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 00:15:33.510528   58019 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-165951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-165951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:15:33.510612   58019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:15:33.510658   58019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:15:33.559564   58019 cri.go:89] found id: ""
	I0816 00:15:33.559649   58019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:15:33.569952   58019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:15:33.579958   58019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:15:33.590359   58019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:15:33.590381   58019 kubeadm.go:157] found existing configuration files:
	
	I0816 00:15:33.590422   58019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:15:33.600838   58019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:15:33.600899   58019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:15:33.612032   58019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:15:33.621893   58019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:15:33.621941   58019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:15:33.631721   58019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:15:33.642473   58019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:15:33.642538   58019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:15:33.657118   58019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:15:33.670839   58019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:15:33.670915   58019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:15:33.682286   58019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:15:34.019109   58019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:17:32.116167   58019 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:17:32.116258   58019 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:17:32.117875   58019 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:17:32.117927   58019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:17:32.118032   58019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:17:32.118145   58019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:17:32.118232   58019 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:17:32.118290   58019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:17:32.119982   58019 out.go:235]   - Generating certificates and keys ...
	I0816 00:17:32.120077   58019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:17:32.120136   58019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:17:32.120207   58019 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 00:17:32.120278   58019 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 00:17:32.120326   58019 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 00:17:32.120394   58019 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 00:17:32.120468   58019 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 00:17:32.120638   58019 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-165951 localhost] and IPs [192.168.72.157 127.0.0.1 ::1]
	I0816 00:17:32.120686   58019 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 00:17:32.120787   58019 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-165951 localhost] and IPs [192.168.72.157 127.0.0.1 ::1]
	I0816 00:17:32.120841   58019 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 00:17:32.120890   58019 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 00:17:32.120928   58019 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 00:17:32.120971   58019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:17:32.121035   58019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:17:32.121079   58019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:17:32.121144   58019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:17:32.121226   58019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:17:32.121369   58019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:17:32.121472   58019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:17:32.121503   58019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:17:32.121565   58019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:17:32.123006   58019 out.go:235]   - Booting up control plane ...
	I0816 00:17:32.123078   58019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:17:32.123166   58019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:17:32.123259   58019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:17:32.123329   58019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:17:32.123478   58019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:17:32.123532   58019 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:17:32.123586   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:17:32.123814   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:17:32.123888   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:17:32.124045   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:17:32.124115   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:17:32.124280   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:17:32.124343   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:17:32.124503   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:17:32.124558   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:17:32.124716   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:17:32.124723   58019 kubeadm.go:310] 
	I0816 00:17:32.124757   58019 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:17:32.124793   58019 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:17:32.124799   58019 kubeadm.go:310] 
	I0816 00:17:32.124825   58019 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:17:32.124852   58019 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:17:32.124951   58019 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:17:32.124962   58019 kubeadm.go:310] 
	I0816 00:17:32.125091   58019 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:17:32.125122   58019 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:17:32.125153   58019 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:17:32.125160   58019 kubeadm.go:310] 
	I0816 00:17:32.125250   58019 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:17:32.125322   58019 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:17:32.125328   58019 kubeadm.go:310] 
	I0816 00:17:32.125412   58019 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:17:32.125490   58019 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:17:32.125559   58019 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:17:32.125615   58019 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:17:32.125635   58019 kubeadm.go:310] 
	W0816 00:17:32.125728   58019 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-165951 localhost] and IPs [192.168.72.157 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-165951 localhost] and IPs [192.168.72.157 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 00:17:32.125769   58019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
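The failed init is followed by kubeadm reset to wipe the partial control plane before retrying. If the retry stalls at the same kubelet health check (as it does below), the checks suggested in the error text can be run directly on the node; a minimal diagnostic sketch, assuming the profile name kubernetes-upgrade-165951 seen in the certificate SANs above and that minikube ssh command passthrough is available:

	minikube -p kubernetes-upgrade-165951 ssh -- sudo systemctl status kubelet
	minikube -p kubernetes-upgrade-165951 ssh -- sudo journalctl -u kubelet -n 100 --no-pager
	minikube -p kubernetes-upgrade-165951 ssh -- curl -sSL http://localhost:10248/healthz
	minikube -p kubernetes-upgrade-165951 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
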
	I0816 00:17:33.026837   58019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:17:33.041005   58019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:17:33.050777   58019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:17:33.050802   58019 kubeadm.go:157] found existing configuration files:
	
	I0816 00:17:33.050863   58019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:17:33.060183   58019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:17:33.060249   58019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:17:33.069698   58019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:17:33.078877   58019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:17:33.078958   58019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:17:33.088139   58019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:17:33.096995   58019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:17:33.097057   58019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:17:33.106395   58019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:17:33.115389   58019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:17:33.115444   58019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:17:33.124503   58019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:17:33.197721   58019 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:17:33.197797   58019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:17:33.337745   58019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:17:33.337945   58019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:17:33.338124   58019 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:17:33.545158   58019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:17:33.547121   58019 out.go:235]   - Generating certificates and keys ...
	I0816 00:17:33.547225   58019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:17:33.547312   58019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:17:33.547404   58019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:17:33.547491   58019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:17:33.547591   58019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:17:33.547667   58019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:17:33.548034   58019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:17:33.548859   58019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:17:33.549597   58019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:17:33.550370   58019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:17:33.550733   58019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:17:33.550817   58019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:17:33.669408   58019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:17:33.795501   58019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:17:34.103992   58019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:17:34.314578   58019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:17:34.329137   58019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:17:34.330065   58019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:17:34.330119   58019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:17:34.465150   58019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:17:34.467063   58019 out.go:235]   - Booting up control plane ...
	I0816 00:17:34.467165   58019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:17:34.474673   58019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:17:34.475543   58019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:17:34.476281   58019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:17:34.480328   58019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:18:14.482562   58019 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:18:14.483010   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:18:14.483223   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:18:19.483814   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:18:19.484026   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:18:29.484599   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:18:29.485597   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:18:49.488533   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:18:49.488748   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:19:29.488760   58019 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:19:29.489523   58019 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:19:29.489552   58019 kubeadm.go:310] 
	I0816 00:19:29.489602   58019 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:19:29.489637   58019 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:19:29.489643   58019 kubeadm.go:310] 
	I0816 00:19:29.489690   58019 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:19:29.489736   58019 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:19:29.489916   58019 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:19:29.489940   58019 kubeadm.go:310] 
	I0816 00:19:29.490072   58019 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:19:29.490120   58019 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:19:29.490163   58019 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:19:29.490176   58019 kubeadm.go:310] 
	I0816 00:19:29.490317   58019 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:19:29.490433   58019 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:19:29.490443   58019 kubeadm.go:310] 
	I0816 00:19:29.490591   58019 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:19:29.490706   58019 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:19:29.490810   58019 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:19:29.490922   58019 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:19:29.490942   58019 kubeadm.go:310] 
	I0816 00:19:29.491537   58019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:19:29.491749   58019 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:19:29.491868   58019 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:19:29.491914   58019 kubeadm.go:394] duration metric: took 3m55.981390551s to StartCluster
	I0816 00:19:29.491966   58019 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:19:29.492032   58019 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:19:29.534991   58019 cri.go:89] found id: ""
	I0816 00:19:29.535027   58019 logs.go:276] 0 containers: []
	W0816 00:19:29.535035   58019 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:19:29.535042   58019 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:19:29.535099   58019 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:19:29.569663   58019 cri.go:89] found id: ""
	I0816 00:19:29.569691   58019 logs.go:276] 0 containers: []
	W0816 00:19:29.569699   58019 logs.go:278] No container was found matching "etcd"
	I0816 00:19:29.569705   58019 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:19:29.569757   58019 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:19:29.605538   58019 cri.go:89] found id: ""
	I0816 00:19:29.605561   58019 logs.go:276] 0 containers: []
	W0816 00:19:29.605569   58019 logs.go:278] No container was found matching "coredns"
	I0816 00:19:29.605574   58019 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:19:29.605636   58019 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:19:29.641548   58019 cri.go:89] found id: ""
	I0816 00:19:29.641577   58019 logs.go:276] 0 containers: []
	W0816 00:19:29.641584   58019 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:19:29.641590   58019 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:19:29.641638   58019 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:19:29.677888   58019 cri.go:89] found id: ""
	I0816 00:19:29.677915   58019 logs.go:276] 0 containers: []
	W0816 00:19:29.677923   58019 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:19:29.677929   58019 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:19:29.677986   58019 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:19:29.714100   58019 cri.go:89] found id: ""
	I0816 00:19:29.714127   58019 logs.go:276] 0 containers: []
	W0816 00:19:29.714135   58019 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:19:29.714141   58019 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:19:29.714187   58019 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:19:29.753851   58019 cri.go:89] found id: ""
	I0816 00:19:29.753875   58019 logs.go:276] 0 containers: []
	W0816 00:19:29.753883   58019 logs.go:278] No container was found matching "kindnet"
	I0816 00:19:29.753892   58019 logs.go:123] Gathering logs for kubelet ...
	I0816 00:19:29.753902   58019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:19:29.807507   58019 logs.go:123] Gathering logs for dmesg ...
	I0816 00:19:29.807548   58019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:19:29.823592   58019 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:19:29.823635   58019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:19:29.961294   58019 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:19:29.961315   58019 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:19:29.961330   58019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:19:30.081642   58019 logs.go:123] Gathering logs for container status ...
	I0816 00:19:30.081679   58019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 00:19:30.130493   58019 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 00:19:30.130569   58019 out.go:270] * 
	W0816 00:19:30.130645   58019 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:19:30.130666   58019 out.go:270] * 
	W0816 00:19:30.131907   58019 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
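To capture the full log the box above asks for, the suggested command can be scoped to this run's profile (profile name taken from the certificate SANs earlier in the log; everything else is exactly as the message suggests):

	minikube logs --file=logs.txt -p kubernetes-upgrade-165951
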
	I0816 00:19:30.135315   58019 out.go:201] 
	W0816 00:19:30.136490   58019 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:19:30.136550   58019 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 00:19:30.136584   58019 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 00:19:30.137999   58019 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
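Note: the suggestion embedded in the output above is to align the kubelet cgroup driver with systemd. A minimal sketch of retrying the same failing start with that extra flag (reusing the profile and flags from the command above; not verified against this run) would be:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd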
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-165951
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-165951: (1.409948799s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-165951 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-165951 status --format={{.Host}}: exit status 7 (82.606346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0816 00:19:53.800092   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.185470143s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-165951 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.593396ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-165951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-165951
	    minikube start -p kubernetes-upgrade-165951 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1659512 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-165951 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
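Note: the downgrade guard above leaves the existing v1.31.0 cluster untouched. One way to confirm the control plane is still serving v1.31.0 is to re-run the same version query used earlier in this test (a minimal sketch, assuming the kubeconfig context is still current):

	kubectl --context kubernetes-upgrade-165951 version --output=json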
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-165951 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (14.166923755s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-16 00:20:24.195499283 +0000 UTC m=+4498.776078247
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-165951 -n kubernetes-upgrade-165951
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-165951 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-165951 logs -n 25: (1.748901386s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-153553                | NoKubernetes-153553       | jenkins | v1.33.1 | 16 Aug 24 00:15 UTC | 16 Aug 24 00:16 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-329005 stop           | minikube                  | jenkins | v1.26.0 | 16 Aug 24 00:16 UTC | 16 Aug 24 00:16 UTC |
	| ssh     | -p NoKubernetes-153553 sudo           | NoKubernetes-153553       | jenkins | v1.33.1 | 16 Aug 24 00:16 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-329005             | stopped-upgrade-329005    | jenkins | v1.33.1 | 16 Aug 24 00:16 UTC | 16 Aug 24 00:17 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-153553                | NoKubernetes-153553       | jenkins | v1.33.1 | 16 Aug 24 00:16 UTC | 16 Aug 24 00:16 UTC |
	| start   | -p NoKubernetes-153553                | NoKubernetes-153553       | jenkins | v1.33.1 | 16 Aug 24 00:16 UTC | 16 Aug 24 00:17 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-986094             | running-upgrade-986094    | jenkins | v1.33.1 | 16 Aug 24 00:16 UTC | 16 Aug 24 00:16 UTC |
	| start   | -p cert-expiration-057647             | cert-expiration-057647    | jenkins | v1.33.1 | 16 Aug 24 00:16 UTC | 16 Aug 24 00:17 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-329005             | stopped-upgrade-329005    | jenkins | v1.33.1 | 16 Aug 24 00:17 UTC | 16 Aug 24 00:17 UTC |
	| start   | -p force-systemd-flag-771420          | force-systemd-flag-771420 | jenkins | v1.33.1 | 16 Aug 24 00:17 UTC | 16 Aug 24 00:18 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-153553 sudo           | NoKubernetes-153553       | jenkins | v1.33.1 | 16 Aug 24 00:17 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-153553                | NoKubernetes-153553       | jenkins | v1.33.1 | 16 Aug 24 00:17 UTC | 16 Aug 24 00:17 UTC |
	| start   | -p cert-options-798942                | cert-options-798942       | jenkins | v1.33.1 | 16 Aug 24 00:17 UTC | 16 Aug 24 00:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-771420 ssh cat     | force-systemd-flag-771420 | jenkins | v1.33.1 | 16 Aug 24 00:18 UTC | 16 Aug 24 00:18 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-771420          | force-systemd-flag-771420 | jenkins | v1.33.1 | 16 Aug 24 00:18 UTC | 16 Aug 24 00:18 UTC |
	| start   | -p pause-937923 --memory=2048         | pause-937923              | jenkins | v1.33.1 | 16 Aug 24 00:18 UTC | 16 Aug 24 00:19 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-798942 ssh               | cert-options-798942       | jenkins | v1.33.1 | 16 Aug 24 00:18 UTC | 16 Aug 24 00:18 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-798942 -- sudo        | cert-options-798942       | jenkins | v1.33.1 | 16 Aug 24 00:18 UTC | 16 Aug 24 00:18 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-798942                | cert-options-798942       | jenkins | v1.33.1 | 16 Aug 24 00:18 UTC | 16 Aug 24 00:18 UTC |
	| start   | -p auto-697641 --memory=3072          | auto-697641               | jenkins | v1.33.1 | 16 Aug 24 00:18 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-165951          | kubernetes-upgrade-165951 | jenkins | v1.33.1 | 16 Aug 24 00:19 UTC | 16 Aug 24 00:19 UTC |
	| start   | -p kubernetes-upgrade-165951          | kubernetes-upgrade-165951 | jenkins | v1.33.1 | 16 Aug 24 00:19 UTC | 16 Aug 24 00:20 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-937923                       | pause-937923              | jenkins | v1.33.1 | 16 Aug 24 00:19 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-165951          | kubernetes-upgrade-165951 | jenkins | v1.33.1 | 16 Aug 24 00:20 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-165951          | kubernetes-upgrade-165951 | jenkins | v1.33.1 | 16 Aug 24 00:20 UTC | 16 Aug 24 00:20 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:20:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 00:20:10.069603   63623 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:20:10.070007   63623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:20:10.070041   63623 out.go:358] Setting ErrFile to fd 2...
	I0816 00:20:10.070060   63623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:20:10.070512   63623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:20:10.071109   63623 out.go:352] Setting JSON to false
	I0816 00:20:10.072418   63623 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7310,"bootTime":1723760300,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:20:10.072480   63623 start.go:139] virtualization: kvm guest
	I0816 00:20:10.074656   63623 out.go:177] * [kubernetes-upgrade-165951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:20:10.075848   63623 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:20:10.075854   63623 notify.go:220] Checking for updates...
	I0816 00:20:10.077996   63623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:20:10.079279   63623 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:20:10.080472   63623 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:20:10.081780   63623 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:20:10.083039   63623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:20:10.084928   63623 config.go:182] Loaded profile config "kubernetes-upgrade-165951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:20:10.085375   63623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:20:10.085423   63623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:20:10.101126   63623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0816 00:20:10.101497   63623 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:20:10.102006   63623 main.go:141] libmachine: Using API Version  1
	I0816 00:20:10.102027   63623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:20:10.102333   63623 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:20:10.102489   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:20:10.102707   63623 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:20:10.103074   63623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:20:10.103112   63623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:20:10.118585   63623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38949
	I0816 00:20:10.119041   63623 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:20:10.119559   63623 main.go:141] libmachine: Using API Version  1
	I0816 00:20:10.119582   63623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:20:10.119874   63623 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:20:10.120150   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:20:10.157286   63623 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:20:10.158568   63623 start.go:297] selected driver: kvm2
	I0816 00:20:10.158591   63623 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-165951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-165951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:20:10.158706   63623 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:20:10.159460   63623 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:20:10.159557   63623 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:20:10.175446   63623 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:20:10.175902   63623 cni.go:84] Creating CNI manager for ""
	I0816 00:20:10.175923   63623 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:20:10.175980   63623 start.go:340] cluster config:
	{Name:kubernetes-upgrade-165951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-165951 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:20:10.176130   63623 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:20:10.178133   63623 out.go:177] * Starting "kubernetes-upgrade-165951" primary control-plane node in "kubernetes-upgrade-165951" cluster
	I0816 00:20:10.179283   63623 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:20:10.179350   63623 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:20:10.179367   63623 cache.go:56] Caching tarball of preloaded images
	I0816 00:20:10.179451   63623 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:20:10.179466   63623 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 00:20:10.179580   63623 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/config.json ...
	I0816 00:20:10.179812   63623 start.go:360] acquireMachinesLock for kubernetes-upgrade-165951: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:20:10.179861   63623 start.go:364] duration metric: took 27.261µs to acquireMachinesLock for "kubernetes-upgrade-165951"
	I0816 00:20:10.179878   63623 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:20:10.179888   63623 fix.go:54] fixHost starting: 
	I0816 00:20:10.180229   63623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:20:10.180269   63623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:20:10.196010   63623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37979
	I0816 00:20:10.196542   63623 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:20:10.197075   63623 main.go:141] libmachine: Using API Version  1
	I0816 00:20:10.197101   63623 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:20:10.197431   63623 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:20:10.197621   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:20:10.197863   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetState
	I0816 00:20:10.199468   63623 fix.go:112] recreateIfNeeded on kubernetes-upgrade-165951: state=Running err=<nil>
	W0816 00:20:10.199485   63623 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:20:10.201179   63623 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-165951" VM ...
	I0816 00:20:10.350814   63437 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.234677658s)
	I0816 00:20:10.350841   63437 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:20:10.350899   63437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:20:10.356895   63437 start.go:563] Will wait 60s for crictl version
	I0816 00:20:10.356960   63437 ssh_runner.go:195] Run: which crictl
	I0816 00:20:10.361071   63437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:20:10.402483   63437 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:20:10.402569   63437 ssh_runner.go:195] Run: crio --version
	I0816 00:20:10.433257   63437 ssh_runner.go:195] Run: crio --version
	I0816 00:20:10.468810   63437 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:20:08.999037   62772 pod_ready.go:103] pod "coredns-6f6b679f8f-wmmtl" in "kube-system" namespace has status "Ready":"False"
	I0816 00:20:11.000048   62772 pod_ready.go:103] pod "coredns-6f6b679f8f-wmmtl" in "kube-system" namespace has status "Ready":"False"
	I0816 00:20:13.001922   62772 pod_ready.go:103] pod "coredns-6f6b679f8f-wmmtl" in "kube-system" namespace has status "Ready":"False"
	I0816 00:20:10.202357   63623 machine.go:93] provisionDockerMachine start ...
	I0816 00:20:10.202384   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:20:10.202613   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:10.204953   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.205411   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:10.205443   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.205550   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:20:10.205709   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:10.205901   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:10.206046   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:20:10.206233   63623 main.go:141] libmachine: Using SSH client type: native
	I0816 00:20:10.206479   63623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:20:10.206494   63623 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:20:10.332334   63623 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-165951
	
	I0816 00:20:10.332370   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetMachineName
	I0816 00:20:10.332604   63623 buildroot.go:166] provisioning hostname "kubernetes-upgrade-165951"
	I0816 00:20:10.332633   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetMachineName
	I0816 00:20:10.332834   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:10.335626   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.336120   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:10.336150   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.336324   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:20:10.336543   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:10.336748   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:10.336908   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:20:10.337102   63623 main.go:141] libmachine: Using SSH client type: native
	I0816 00:20:10.337324   63623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:20:10.337338   63623 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-165951 && echo "kubernetes-upgrade-165951" | sudo tee /etc/hostname
	I0816 00:20:10.480855   63623 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-165951
	
	I0816 00:20:10.480876   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:10.483459   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.483809   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:10.483858   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.483979   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:20:10.484177   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:10.484343   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:10.484494   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:20:10.484712   63623 main.go:141] libmachine: Using SSH client type: native
	I0816 00:20:10.484948   63623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:20:10.484973   63623 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-165951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-165951/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-165951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:20:10.613496   63623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:20:10.613528   63623 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:20:10.613571   63623 buildroot.go:174] setting up certificates
	I0816 00:20:10.613582   63623 provision.go:84] configureAuth start
	I0816 00:20:10.613597   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetMachineName
	I0816 00:20:10.613922   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetIP
	I0816 00:20:10.616939   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.617372   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:10.617401   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.617553   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:10.620212   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.620740   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:10.620813   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.621009   63623 provision.go:143] copyHostCerts
	I0816 00:20:10.621067   63623 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:20:10.621088   63623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:20:10.621161   63623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:20:10.621291   63623 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:20:10.621309   63623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:20:10.621339   63623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:20:10.621428   63623 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:20:10.621442   63623 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:20:10.621470   63623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:20:10.621600   63623 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-165951 san=[127.0.0.1 192.168.72.157 kubernetes-upgrade-165951 localhost minikube]
	I0816 00:20:10.715173   63623 provision.go:177] copyRemoteCerts
	I0816 00:20:10.715237   63623 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:20:10.715264   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:10.718291   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.718756   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:10.718795   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.719087   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:20:10.719310   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:10.719512   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:20:10.719689   63623 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa Username:docker}
	I0816 00:20:10.813893   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:20:10.858851   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0816 00:20:10.904192   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:20:10.960463   63623 provision.go:87] duration metric: took 346.867148ms to configureAuth
	I0816 00:20:10.960495   63623 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:20:10.960710   63623 config.go:182] Loaded profile config "kubernetes-upgrade-165951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:20:10.960805   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:10.963839   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.964358   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:10.964389   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:10.964598   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:20:10.964822   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:10.965037   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:10.965223   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:20:10.965603   63623 main.go:141] libmachine: Using SSH client type: native
	I0816 00:20:10.965866   63623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:20:10.965888   63623 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:20:11.926557   63623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:20:11.926586   63623 machine.go:96] duration metric: took 1.72421021s to provisionDockerMachine
	I0816 00:20:11.926599   63623 start.go:293] postStartSetup for "kubernetes-upgrade-165951" (driver="kvm2")
	I0816 00:20:11.926635   63623 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:20:11.926664   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:20:11.926995   63623 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:20:11.927027   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:11.929623   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:11.929946   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:11.929967   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:11.930068   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:20:11.930200   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:11.930312   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:20:11.930469   63623 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa Username:docker}
	I0816 00:20:12.043064   63623 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:20:12.050910   63623 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:20:12.050937   63623 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:20:12.051009   63623 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:20:12.051117   63623 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:20:12.051245   63623 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:20:12.087860   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:20:12.168795   63623 start.go:296] duration metric: took 242.180728ms for postStartSetup
	I0816 00:20:12.168844   63623 fix.go:56] duration metric: took 1.988954702s for fixHost
	I0816 00:20:12.168870   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:12.171755   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:12.172170   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:12.172199   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:12.172401   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:20:12.172636   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:12.172795   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:12.172967   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:20:12.173139   63623 main.go:141] libmachine: Using SSH client type: native
	I0816 00:20:12.173318   63623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0816 00:20:12.173328   63623 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:20:12.328228   63623 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723767612.318231940
	
	I0816 00:20:12.328251   63623 fix.go:216] guest clock: 1723767612.318231940
	I0816 00:20:12.328260   63623 fix.go:229] Guest: 2024-08-16 00:20:12.31823194 +0000 UTC Remote: 2024-08-16 00:20:12.16884986 +0000 UTC m=+2.136733632 (delta=149.38208ms)
	I0816 00:20:12.328280   63623 fix.go:200] guest clock delta is within tolerance: 149.38208ms
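The clock check above runs `date +%s.%N` in the guest over SSH and compares it with the host clock, accepting the 149ms delta as within tolerance. A sketch of the same probe run by hand, using the IP and key path from the ssh client line above and assuming `bc` on the host:

    GUEST_KEY=/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa
    guest=$(ssh -o StrictHostKeyChecking=no -i "$GUEST_KEY" docker@192.168.72.157 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest-host clock delta: $(echo "$guest - $host" | bc)s"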
	I0816 00:20:12.328286   63623 start.go:83] releasing machines lock for "kubernetes-upgrade-165951", held for 2.148416587s
	I0816 00:20:12.328306   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:20:12.328625   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetIP
	I0816 00:20:12.331358   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:12.331746   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:12.331775   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:12.331908   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:20:12.332409   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:20:12.332596   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .DriverName
	I0816 00:20:12.332702   63623 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:20:12.332768   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:12.332804   63623 ssh_runner.go:195] Run: cat /version.json
	I0816 00:20:12.332827   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHHostname
	I0816 00:20:12.335605   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:12.335812   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:12.336043   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:12.336089   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:12.336240   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:20:12.336356   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:12.336413   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:12.336422   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:12.336529   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHPort
	I0816 00:20:12.336594   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:20:12.336686   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHKeyPath
	I0816 00:20:12.336835   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetSSHUsername
	I0816 00:20:12.336842   63623 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa Username:docker}
	I0816 00:20:12.336973   63623 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/kubernetes-upgrade-165951/id_rsa Username:docker}
	I0816 00:20:12.488630   63623 ssh_runner.go:195] Run: systemctl --version
	I0816 00:20:12.532960   63623 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:20:12.725562   63623 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:20:12.737697   63623 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:20:12.737784   63623 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:20:12.753402   63623 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 00:20:12.753427   63623 start.go:495] detecting cgroup driver to use...
	I0816 00:20:12.753512   63623 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:20:12.777726   63623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:20:12.798778   63623 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:20:12.798944   63623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:20:12.816964   63623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:20:12.839938   63623 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:20:13.059039   63623 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:20:13.224357   63623 docker.go:233] disabling docker service ...
	I0816 00:20:13.224417   63623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:20:13.245288   63623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:20:13.263111   63623 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:20:13.450378   63623 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:20:13.642334   63623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
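A quick confirmation that the Docker-side runtimes really are out of the way after the stop/disable/mask sequence above (a sketch; `is-enabled` reports "masked" or "disabled" for the units handled above and exits non-zero, hence the `|| true`):

    sudo systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket || true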
	I0816 00:20:13.658814   63623 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:20:13.686093   63623 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:20:13.686175   63623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:20:13.698221   63623 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:20:13.698288   63623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:20:13.711999   63623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:20:13.725311   63623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:20:13.737723   63623 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:20:13.752705   63623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:20:13.768404   63623 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:20:13.786721   63623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
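After the sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read as follows (a sketch; values taken from the commands above, exact whitespace may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",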
	I0816 00:20:13.807440   63623 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:20:13.821609   63623 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:20:13.835419   63623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:20:14.021570   63623 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:20:14.434799   63623 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:20:14.434880   63623 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:20:14.440352   63623 start.go:563] Will wait 60s for crictl version
	I0816 00:20:14.440403   63623 ssh_runner.go:195] Run: which crictl
	I0816 00:20:14.445704   63623 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:20:14.501026   63623 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:20:14.501105   63623 ssh_runner.go:195] Run: crio --version
	I0816 00:20:14.533992   63623 ssh_runner.go:195] Run: crio --version
	I0816 00:20:14.567795   63623 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:20:10.469958   63437 main.go:141] libmachine: (pause-937923) Calling .GetIP
	I0816 00:20:10.472444   63437 main.go:141] libmachine: (pause-937923) DBG | domain pause-937923 has defined MAC address 52:54:00:a0:01:c7 in network mk-pause-937923
	I0816 00:20:10.472788   63437 main.go:141] libmachine: (pause-937923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:01:c7", ip: ""} in network mk-pause-937923: {Iface:virbr1 ExpiryTime:2024-08-16 01:18:44 +0000 UTC Type:0 Mac:52:54:00:a0:01:c7 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:pause-937923 Clientid:01:52:54:00:a0:01:c7}
	I0816 00:20:10.472820   63437 main.go:141] libmachine: (pause-937923) DBG | domain pause-937923 has defined IP address 192.168.83.162 and MAC address 52:54:00:a0:01:c7 in network mk-pause-937923
	I0816 00:20:10.473080   63437 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
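The grep above checks that the guest's /etc/hosts maps host.minikube.internal to the host-side gateway (192.168.83.1 here). A hypothetical equivalent for adding the entry when it is missing (a sketch, not necessarily minikube's exact mechanism):

    grep -q 'host.minikube.internal' /etc/hosts || \
      printf '%s\thost.minikube.internal\n' 192.168.83.1 | sudo tee -a /etc/hosts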
	I0816 00:20:10.477797   63437 kubeadm.go:883] updating cluster {Name:pause-937923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-937923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:20:10.477981   63437 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:20:10.478049   63437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:20:10.525709   63437 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:20:10.525735   63437 crio.go:433] Images already preloaded, skipping extraction
	I0816 00:20:10.526048   63437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:20:10.565734   63437 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:20:10.565757   63437 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:20:10.565765   63437 kubeadm.go:934] updating node { 192.168.83.162 8443 v1.31.0 crio true true} ...
	I0816 00:20:10.565890   63437 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-937923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-937923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:20:10.565954   63437 ssh_runner.go:195] Run: crio config
	I0816 00:20:10.628426   63437 cni.go:84] Creating CNI manager for ""
	I0816 00:20:10.628454   63437 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:20:10.628467   63437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:20:10.628494   63437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.162 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-937923 NodeName:pause-937923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:20:10.628689   63437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-937923"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:20:10.628762   63437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:20:10.642239   63437 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:20:10.642308   63437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:20:10.655770   63437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0816 00:20:10.676856   63437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:20:10.697309   63437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
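The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (2156 bytes). A hypothetical sanity check, assuming the `kubeadm config validate` subcommand available in recent kubeadm releases and the binary location shown earlier in the log:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new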
	I0816 00:20:10.720509   63437 ssh_runner.go:195] Run: grep 192.168.83.162	control-plane.minikube.internal$ /etc/hosts
	I0816 00:20:10.724849   63437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:20:10.877239   63437 ssh_runner.go:195] Run: sudo systemctl start kubelet
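A quick check that the kubelet unit started above actually came up (a sketch):

    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet --no-pager -n 20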
	I0816 00:20:10.895117   63437 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923 for IP: 192.168.83.162
	I0816 00:20:10.895140   63437 certs.go:194] generating shared ca certs ...
	I0816 00:20:10.895159   63437 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:20:10.895329   63437 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:20:10.895381   63437 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:20:10.895394   63437 certs.go:256] generating profile certs ...
	I0816 00:20:10.895491   63437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923/client.key
	I0816 00:20:10.895574   63437 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923/apiserver.key.a0232ba9
	I0816 00:20:10.895625   63437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923/proxy-client.key
	I0816 00:20:10.895786   63437 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:20:10.895828   63437 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:20:10.895841   63437 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:20:10.895876   63437 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:20:10.895909   63437 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:20:10.895938   63437 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:20:10.896002   63437 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:20:10.896836   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:20:10.931983   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:20:10.965483   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:20:10.996675   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:20:11.024203   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 00:20:11.053361   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:20:11.078552   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:20:11.109649   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:20:11.142823   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:20:11.173542   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:20:11.199306   63437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:20:11.224865   63437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
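A spot-check that the apiserver certificate and key copied above still pair up (a sketch; assumes RSA keys, which is what minikube generates by default, so the two digests should match):

    sudo openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
    sudo openssl rsa  -noout -modulus -in /var/lib/minikube/certs/apiserver.key  | openssl md5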
	I0816 00:20:11.243341   63437 ssh_runner.go:195] Run: openssl version
	I0816 00:20:11.249435   63437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:20:11.261961   63437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:20:11.266889   63437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:20:11.266964   63437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:20:11.273059   63437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:20:11.284753   63437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:20:11.296087   63437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:20:11.300815   63437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:20:11.300866   63437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:20:11.306985   63437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:20:11.317469   63437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:20:11.329662   63437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:20:11.334767   63437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:20:11.334825   63437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:20:11.340494   63437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
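The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above are OpenSSL subject hashes, which is how the trust store lookups find the certificates. A sketch of how the minikubeCA link is derived:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0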
	I0816 00:20:11.350673   63437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:20:11.355825   63437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:20:11.362132   63437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:20:11.368429   63437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:20:11.374395   63437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:20:11.381220   63437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:20:11.389122   63437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
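Each `-checkend 86400` probe above exits 0 only if the certificate is still valid 86400s (24h) from now; run standalone it reads like this (a sketch):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid 24h from now" || echo "expires within 24h"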
	I0816 00:20:11.396426   63437 kubeadm.go:392] StartCluster: {Name:pause-937923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-937923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:20:11.396570   63437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:20:11.396645   63437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:20:11.440283   63437 cri.go:89] found id: "42d9216cb7a9cfbe9de0233ed9d993fd1de9980fc7de41c3dbb58618771348e8"
	I0816 00:20:11.440307   63437 cri.go:89] found id: "6e6d645dc3c2921525716d0aef748c8529727c3d84d149d079635e6f458d0e1e"
	I0816 00:20:11.440313   63437 cri.go:89] found id: "29a242cef9217bcc0a258f9b1596207d08e5716c4eb5c07f47e3db991670e785"
	I0816 00:20:11.440318   63437 cri.go:89] found id: "c979bb729012e1480cd44397b1c1fa6270197f7e3984ec67df120193a4ffa219"
	I0816 00:20:11.440322   63437 cri.go:89] found id: "0ead2d8705d53f50daee1fc2c64295c628926d508bae5d339c8a5215627feb31"
	I0816 00:20:11.440327   63437 cri.go:89] found id: "ae903c3a1050938022922edb0a7603f28f0560a2cfa40de63a678c65fe238596"
	I0816 00:20:11.440332   63437 cri.go:89] found id: "ae8d9ecc4085dd979d8acb92679581a30c305524bca008e1b8de66a5f4a4a231"
	I0816 00:20:11.440336   63437 cri.go:89] found id: ""
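The container IDs above come from the label-filtered crictl query shown earlier; the raw `runc list -f json` dump that follows is easier to scan when reduced to id, status and container name. A sketch, assuming `jq` is available on the host doing the inspection (it is not used by the test itself):

    sudo runc list -f json | \
      jq -r '.[] | "\(.id[0:12])  \(.status)  \(.annotations["io.kubernetes.container.name"])"'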
	I0816 00:20:11.440387   63437 ssh_runner.go:195] Run: sudo runc list -f json
	I0816 00:20:11.470995   63437 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0ead2d8705d53f50daee1fc2c64295c628926d508bae5d339c8a5215627feb31","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0ead2d8705d53f50daee1fc2c64295c628926d508bae5d339c8a5215627feb31/userdata","rootfs":"/var/lib/containers/storage/overlay/6ef05bf56ace89b331bcac0ba8e5451aee4ab5087a40382a8161285a9fdb77bb/merged","created":"2024-08-16T00:19:04.873966509Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f72d0944","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f72d0944\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0ead2d8705d53f50daee1fc2c64295c628926d508bae5d339c8a5215627feb31","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-16T00:19:04.781266147Z","io.kubernetes.cri-o.Image":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.31.0","io.kubernetes.cri-o.ImageRef":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-937923\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f1d8c0bc25775ac1cb3f58858e3c1952\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-937923_f1d8c0bc25775ac1cb3f58858e3c1952/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6ef05bf56ace89b331bcac0ba8e5451aee4ab5087a40382a8161285a9fdb77bb/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-937923_kube-system_f1d8c0bc25775ac1cb3f58858e3c1952_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-937923_kube-system_f1d8c0bc25775ac1cb3f58858e3c1952_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f1d8c0bc25775ac1cb3f58858e3c1952/etc-hosts\",\"readonly\":false,\"
propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f1d8c0bc25775ac1cb3f58858e3c1952/containers/kube-apiserver/57885727\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-937923","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f1d8c0bc25775ac1cb3f58858e3c1952","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.83.162:8443","kubernetes.io/config
.hash":"f1d8c0bc25775ac1cb3f58858e3c1952","kubernetes.io/config.seen":"2024-08-16T00:19:03.836496778Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"29a242cef9217bcc0a258f9b1596207d08e5716c4eb5c07f47e3db991670e785","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/29a242cef9217bcc0a258f9b1596207d08e5716c4eb5c07f47e3db991670e785/userdata","rootfs":"/var/lib/containers/storage/overlay/117fbd3a8cf2cf15ca2db62bc694340d3a0aac286e9b844f786c9ddd231cccd4/merged","created":"2024-08-16T00:19:15.863147426Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"78ccb3c","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"78ccb3c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.cont
ainer.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"29a242cef9217bcc0a258f9b1596207d08e5716c4eb5c07f47e3db991670e785","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-16T00:19:15.737026958Z","io.kubernetes.cri-o.Image":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.31.0","io.kubernetes.cri-o.ImageRef":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-fvn9w\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"723f3f40-4e4a-4a15-bfde-4ec96aa33725\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-fvn9w_723f3f40-4e4a-4a15-bfde-4ec96aa33725/kube-proxy/0.log","io.k
ubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/117fbd3a8cf2cf15ca2db62bc694340d3a0aac286e9b844f786c9ddd231cccd4/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-fvn9w_kube-system_723f3f40-4e4a-4a15-bfde-4ec96aa33725_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-fvn9w_kube-system_723f3f40-4e4a-4a15-bfde-4ec96aa33725_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"
propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/723f3f40-4e4a-4a15-bfde-4ec96aa33725/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/723f3f40-4e4a-4a15-bfde-4ec96aa33725/containers/kube-proxy/8985376b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/723f3f40-4e4a-4a15-bfde-4ec96aa33725/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/723f3f40-4e4a-4a15-bfde-4ec96aa33725/volumes/kubernetes.io~projected/kube-api-access-xfwxw\",\"readonly\":true,\
"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-fvn9w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"723f3f40-4e4a-4a15-bfde-4ec96aa33725","kubernetes.io/config.seen":"2024-08-16T00:19:15.090451620Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4/userdata","rootfs":"/var/lib/containers/storage/overlay/43fd7859066dd100b7d677785121360002776caed63d28f1b2a8d235aa63e263/merged","created":"2024-08-16T00:19:16.061052565Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-16T00:19:15.279740508Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNI
Result":"{\"cniVersion\":\"1.0.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"12:7b:0f:a3:a5:88\"},{\"name\":\"veth8db150a6\",\"mac\":\"1a:b3:a5:09:19:94\"},{\"name\":\"eth0\",\"mac\":\"36:2a:20:7b:ac:bc\",\"sandbox\":\"/var/run/netns/ecdfd0db-9c77-4c81-9c55-ec29a6c14ab4\"}],\"ips\":[{\"interface\":2,\"address\":\"10.244.0.3/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod2952e1b9-66a0-49ed-8017-750881d40fb3","io.kubernetes.cri-o.ContainerID":"3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6f6b679f8f-n6ntf_kube-system_2952e1b9-66a0-49ed-8017-750881d40fb3_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-16T00:19:15.589393389Z","io.kubernetes.cri-o.HostName":"coredns-6f6b679f8f-n6ntf","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/st
orage/overlay-containers/3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"coredns-6f6b679f8f-n6ntf","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"coredns-6f6b679f8f-n6ntf\",\"pod-template-hash\":\"6f6b679f8f\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"2952e1b9-66a0-49ed-8017-750881d40fb3\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6f6b679f8f-n6ntf_2952e1b9-66a0-49ed-8017-750881d40fb3/3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6f6b679f8f-n6ntf\",\"uid\":\"2952e1b9-66a0-49ed-8017-750881d40fb3\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/43fd7859066dd100b7d67
7785121360002776caed63d28f1b2a8d235aa63e263/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6f6b679f8f-n6ntf_kube-system_2952e1b9-66a0-49ed-8017-750881d40fb3_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"memory_limit_in_bytes\":178257920,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6f6b679f8f-n6ntf_kube-system_2952e1b9-66a0-49ed-8017-750881d40fb3_0","io.kubernetes.cri-o.SeccompProfilePath":"Runti
meDefault","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4/userdata/shm","io.kubernetes.pod.name":"coredns-6f6b679f8f-n6ntf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"2952e1b9-66a0-49ed-8017-750881d40fb3","k8s-app":"kube-dns","kubernetes.io/config.seen":"2024-08-16T00:19:15.279740508Z","kubernetes.io/config.source":"api","pod-template-hash":"6f6b679f8f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"42d9216cb7a9cfbe9de0233ed9d993fd1de9980fc7de41c3dbb58618771348e8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/42d9216cb7a9cfbe9de0233ed9d993fd1de9980fc7de41c3dbb58618771348e8/userdata","rootfs":"/var/lib/containers/storage/overlay/343511b231a6cda420576888543268ed8a38a418a1af3fba3147e87154a08b71/merged","created":"2024-08-16T00:19:16.288567746Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e6f52134","io.kubernetes.container.n
ame":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e6f52134\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"
30\"}","io.kubernetes.cri-o.ContainerID":"42d9216cb7a9cfbe9de0233ed9d993fd1de9980fc7de41c3dbb58618771348e8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-16T00:19:16.183131838Z","io.kubernetes.cri-o.IP.0":"10.244.0.3","io.kubernetes.cri-o.Image":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.1","io.kubernetes.cri-o.ImageRef":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6f6b679f8f-n6ntf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2952e1b9-66a0-49ed-8017-750881d40fb3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6f6b679f8f-n6ntf_2952e1b9-66a0-49ed-8017-750881d40fb3/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/stora
ge/overlay/343511b231a6cda420576888543268ed8a38a418a1af3fba3147e87154a08b71/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6f6b679f8f-n6ntf_kube-system_2952e1b9-66a0-49ed-8017-750881d40fb3_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6f6b679f8f-n6ntf_kube-system_2952e1b9-66a0-49ed-8017-750881d40fb3_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/2952e1b9-66a0-49ed-8017-750881d40fb3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"s
elinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2952e1b9-66a0-49ed-8017-750881d40fb3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2952e1b9-66a0-49ed-8017-750881d40fb3/containers/coredns/7ef73404\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/2952e1b9-66a0-49ed-8017-750881d40fb3/volumes/kubernetes.io~projected/kube-api-access-9rvjs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-6f6b679f8f-n6ntf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2952e1b9-66a0-49ed-8017-750881d40fb3","kubernetes.io/config.seen":"2024-08-16T00:19:15.279740508Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion"
:"1.0.2-dev","id":"468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47/userdata","rootfs":"/var/lib/containers/storage/overlay/d0fdfbbb2b5afb88ffee020187c1e8fe5a8792e27892d639c07aa9804fd1ffe3/merged","created":"2024-08-16T00:19:04.643076228Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-16T00:19:03.836499048Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"e14677c9318f2e39ff6fa5e4102cdc67\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pode14677c9318f2e39ff6fa5e4102cdc67","io.kubernetes.cri-o.ContainerID":"468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-pause-937923_kube-system_e14677c9318f2e39ff6f
a5e4102cdc67_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-16T00:19:04.521950541Z","io.kubernetes.cri-o.HostName":"pause-937923","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-scheduler-pause-937923","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"e14677c9318f2e39ff6fa5e4102cdc67\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-937923\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-937923_e14677c9318f2e39ff6fa5e4102cdc67/468980a301ebf7ca041641ee1276c8c5f820cdd30287
e400e116142d2339fe47.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-pause-937923\",\"uid\":\"e14677c9318f2e39ff6fa5e4102cdc67\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d0fdfbbb2b5afb88ffee020187c1e8fe5a8792e27892d639c07aa9804fd1ffe3/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-pause-937923_kube-system_e14677c9318f2e39ff6fa5e4102cdc67_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri
-o.SandboxID":"468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-937923_kube-system_e14677c9318f2e39ff6fa5e4102cdc67_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-pause-937923","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e14677c9318f2e39ff6fa5e4102cdc67","kubernetes.io/config.hash":"e14677c9318f2e39ff6fa5e4102cdc67","kubernetes.io/config.seen":"2024-08-16T00:19:03.836499048Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6e6d645dc3c2921525716d0aef748c8529727c3d84d149d079635e6f458d0e1e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6e6d645dc3c2921525716d0aef748c8529727c3d84d149d079635e6f458d0e1e/userdata","root
fs":"/var/lib/containers/storage/overlay/d845e0fb67e2eddb8466c563cc81b50a85d388568baa135c6d5af7fb7ede8e2b/merged","created":"2024-08-16T00:19:16.156718652Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e6f52134","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e6f52134\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\"
:9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6e6d645dc3c2921525716d0aef748c8529727c3d84d149d079635e6f458d0e1e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-16T00:19:16.103261224Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.11.1","io.kubernetes.cri-o.ImageRef":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6f6b679f8f-q2x2q\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"064eb87c-e7df-4a1b-
840e-bcb9cca1c05b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6f6b679f8f-q2x2q_064eb87c-e7df-4a1b-840e-bcb9cca1c05b/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d845e0fb67e2eddb8466c563cc81b50a85d388568baa135c6d5af7fb7ede8e2b/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6f6b679f8f-q2x2q_kube-system_064eb87c-e7df-4a1b-840e-bcb9cca1c05b_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6f6b679f8f-q2x2q_kube-system_064eb87c-e7df-4a1b-840e-bcb9cca1c05b_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"f
alse","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/064eb87c-e7df-4a1b-840e-bcb9cca1c05b/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/064eb87c-e7df-4a1b-840e-bcb9cca1c05b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/064eb87c-e7df-4a1b-840e-bcb9cca1c05b/containers/coredns/a3251f59\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/064eb87c-e7df-4a1b-840e-bcb9cca1c05b/volumes/kubernetes.io~projected/kube-api-access-xrhgm\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-6f6b679f8f-q2x2q","io.kuberne
tes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"064eb87c-e7df-4a1b-840e-bcb9cca1c05b","kubernetes.io/config.seen":"2024-08-16T00:19:15.245173518Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e/userdata","rootfs":"/var/lib/containers/storage/overlay/ab90a1cdf7f6d5901cb352af0b6d0c3c0d20c02f430259e59d4afebeeecc53ec/merged","created":"2024-08-16T00:19:15.56595015Z","annotations":{"controller-revision-hash":"5976bc5f75","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-16T00:19:15.090451620Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/besteffort/pod723f3f40-4e4a-4a15-bfde-4e
c96aa33725","io.kubernetes.cri-o.ContainerID":"717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-fvn9w_kube-system_723f3f40-4e4a-4a15-bfde-4ec96aa33725_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-16T00:19:15.413341147Z","io.kubernetes.cri-o.HostName":"pause-937923","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-proxy-fvn9w","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"723f3f40-4e4a-4a15-bfde-4ec96aa33725\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-fvn9w\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kube-proxy\",\"controlle
r-revision-hash\":\"5976bc5f75\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-fvn9w_723f3f40-4e4a-4a15-bfde-4ec96aa33725/717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-fvn9w\",\"uid\":\"723f3f40-4e4a-4a15-bfde-4ec96aa33725\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ab90a1cdf7f6d5901cb352af0b6d0c3c0d20c02f430259e59d4afebeeecc53ec/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-fvn9w_kube-system_723f3f40-4e4a-4a15-bfde-4ec96aa33725_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":2,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kuberne
tes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-fvn9w_kube-system_723f3f40-4e4a-4a15-bfde-4ec96aa33725_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e/userdata/shm","io.kubernetes.pod.name":"kube-proxy-fvn9w","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"723f3f40-4e4a-4a15-bfde-4ec96aa33725","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2024-08-16T00:19:15.090451620Z","kubernetes.io/config.source":"api","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d51
1622c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c/userdata","rootfs":"/var/lib/containers/storage/overlay/5f3abe2ad3fa10a3f20f012b232f89348d3aea3bf1cbcc3ef93c1dd95a0156a6/merged","created":"2024-08-16T00:19:04.62631229Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"f1d8c0bc25775ac1cb3f58858e3c1952\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.83.162:8443\",\"kubernetes.io/config.seen\":\"2024-08-16T00:19:03.836496778Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podf1d8c0bc25775ac1cb3f58858e3c1952","io.kubernetes.cri-o.ContainerID":"8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-pause-937923_kube-system_f1d8c0b
c25775ac1cb3f58858e3c1952_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-16T00:19:04.519775932Z","io.kubernetes.cri-o.HostName":"pause-937923","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-apiserver-pause-937923","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"f1d8c0bc25775ac1cb3f58858e3c1952\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-937923\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-937923_f1d8c0bc25775ac1cb3f58858e3c1952/8c731e1b6be25136d847ce94bd21600
3b893cfda5fa1998a535e2d11d511622c.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-pause-937923\",\"uid\":\"f1d8c0bc25775ac1cb3f58858e3c1952\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5f3abe2ad3fa10a3f20f012b232f89348d3aea3bf1cbcc3ef93c1dd95a0156a6/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-pause-937923_kube-system_f1d8c0bc25775ac1cb3f58858e3c1952_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.k
ubernetes.cri-o.SandboxID":"8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-937923_kube-system_f1d8c0bc25775ac1cb3f58858e3c1952_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-pause-937923","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f1d8c0bc25775ac1cb3f58858e3c1952","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.83.162:8443","kubernetes.io/config.hash":"f1d8c0bc25775ac1cb3f58858e3c1952","kubernetes.io/config.seen":"2024-08-16T00:19:03.836496778Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc","pid":0,"status":"stopped","bundle":"/run/containers/storag
e/overlay-containers/ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc/userdata","rootfs":"/var/lib/containers/storage/overlay/43305e0fe40c9392d77c3662bdb0227c4a5c4a6dbdac9d5dd6a473cfc4b1ca49/merged","created":"2024-08-16T00:19:04.607308366Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.83.162:2379\",\"kubernetes.io/config.seen\":\"2024-08-16T00:19:03.836493571Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"e0210f37bcb6847db96686ce394c4adf\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pode0210f37bcb6847db96686ce394c4adf","io.kubernetes.cri-o.ContainerID":"ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-pause-937923_kube-system_e0210f37bcb6847db96686ce394c4adf_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cr
i-o.Created":"2024-08-16T00:19:04.525205426Z","io.kubernetes.cri-o.HostName":"pause-937923","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"etcd-pause-937923","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"e0210f37bcb6847db96686ce394c4adf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-pause-937923\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-937923_e0210f37bcb6847db96686ce394c4adf/ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-pause-937923\",\"uid\":\"e0210f37bcb6847db96
686ce394c4adf\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/43305e0fe40c9392d77c3662bdb0227c4a5c4a6dbdac9d5dd6a473cfc4b1ca49/merged","io.kubernetes.cri-o.Name":"k8s_etcd-pause-937923_kube-system_e0210f37bcb6847db96686ce394c4adf_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-937923_k
ube-system_e0210f37bcb6847db96686ce394c4adf_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc/userdata/shm","io.kubernetes.pod.name":"etcd-pause-937923","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"e0210f37bcb6847db96686ce394c4adf","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.83.162:2379","kubernetes.io/config.hash":"e0210f37bcb6847db96686ce394c4adf","kubernetes.io/config.seen":"2024-08-16T00:19:03.836493571Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ae8d9ecc4085dd979d8acb92679581a30c305524bca008e1b8de66a5f4a4a231","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ae8d9ecc4085dd979d8acb92679581a30c305524bca008e1b8de66a5f4a4a231/userdata","rootfs":"/var/lib/containers/storage/overlay/1afd9e037cf14b568e6e87ac684c945e24765
4a248a9ff7f71dd94b247bdbd12/merged","created":"2024-08-16T00:19:04.760956422Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3994b1a4","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3994b1a4\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ae8d9ecc4085dd979d8acb92679581a30c305524bca008e1b8de66a5f4a4a231","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-16T00:19:04.681007586Z","io.kubernetes.cri-o.Image":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1"
,"io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.31.0","io.kubernetes.cri-o.ImageRef":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-937923\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8d21e76056ed2ea3ddf9c0da2f340d1b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-937923_8d21e76056ed2ea3ddf9c0da2f340d1b/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1afd9e037cf14b568e6e87ac684c945e247654a248a9ff7f71dd94b247bdbd12/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-937923_kube-system_8d21e76056ed2ea3ddf9c0da2f340d1b_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kuberne
tes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-937923_kube-system_8d21e76056ed2ea3ddf9c0da2f340d1b_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8d21e76056ed2ea3ddf9c0da2f340d1b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8d21e76056ed2ea3ddf9c0da2f340d1b/containers/kube-controller-manager/a1e51137\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs
\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-937923","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8d21e76056ed2ea3ddf9c0da2f340d1b","
kubernetes.io/config.hash":"8d21e76056ed2ea3ddf9c0da2f340d1b","kubernetes.io/config.seen":"2024-08-16T00:19:03.836497914Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ae903c3a1050938022922edb0a7603f28f0560a2cfa40de63a678c65fe238596","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ae903c3a1050938022922edb0a7603f28f0560a2cfa40de63a678c65fe238596/userdata","rootfs":"/var/lib/containers/storage/overlay/c8d7cd5505b18229c4739a9d2e604520244b413967cb83513935c5faa43e160b/merged","created":"2024-08-16T00:19:04.833667761Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cdf7d3fa","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cdf7d3fa\",\"io.kubernetes.container.restartCount\":\"0\",\"io
.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ae903c3a1050938022922edb0a7603f28f0560a2cfa40de63a678c65fe238596","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-16T00:19:04.744260474Z","io.kubernetes.cri-o.Image":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri-o.ImageRef":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-937923\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e0210f37bcb6847db96686ce394c4adf\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-937923_e0210f37bcb6847db96686ce394c4adf/etcd/0.log","io.kubernet
es.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c8d7cd5505b18229c4739a9d2e604520244b413967cb83513935c5faa43e160b/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-937923_kube-system_e0210f37bcb6847db96686ce394c4adf_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-937923_kube-system_e0210f37bcb6847db96686ce394c4adf_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e0210f37bcb6847db96686ce394c4adf/etc-hosts\",\"
readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e0210f37bcb6847db96686ce394c4adf/containers/etcd/b570af4c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-937923","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e0210f37bcb6847db96686ce394c4adf","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.83.162:2379","kubernetes.io/config.hash":"e0210f37bcb6847db96686ce394c4adf","kubernetes.io/config.seen":"2024-08-16T00:19:03.836493571Z","kubernetes.io/config.source":"file"
},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c979bb729012e1480cd44397b1c1fa6270197f7e3984ec67df120193a4ffa219","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c979bb729012e1480cd44397b1c1fa6270197f7e3984ec67df120193a4ffa219/userdata","rootfs":"/var/lib/containers/storage/overlay/be8bbda93bc76f13c5af8d64b479aa9a9edab53b9684f1d6413a69da45aa883a/merged","created":"2024-08-16T00:19:04.92273516Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f8fb4364","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f8fb4364\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.
pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c979bb729012e1480cd44397b1c1fa6270197f7e3984ec67df120193a4ffa219","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-08-16T00:19:04.793429012Z","io.kubernetes.cri-o.Image":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.31.0","io.kubernetes.cri-o.ImageRef":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-937923\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e14677c9318f2e39ff6fa5e4102cdc67\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-937923_e14677c9318f2e39ff6fa5e4102cdc67/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/contain
ers/storage/overlay/be8bbda93bc76f13c5af8d64b479aa9a9edab53b9684f1d6413a69da45aa883a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-937923_kube-system_e14677c9318f2e39ff6fa5e4102cdc67_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-937923_kube-system_e14677c9318f2e39ff6fa5e4102cdc67_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e14677c9318f2e39ff6fa5e4102cdc67/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}
,{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e14677c9318f2e39ff6fa5e4102cdc67/containers/kube-scheduler/4157121a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-937923","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e14677c9318f2e39ff6fa5e4102cdc67","kubernetes.io/config.hash":"e14677c9318f2e39ff6fa5e4102cdc67","kubernetes.io/config.seen":"2024-08-16T00:19:03.836499048Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140/userdata",
"rootfs":"/var/lib/containers/storage/overlay/4cef2d8a9bb67161e5dc78542ff83a602674524ab2902882f1725c313b729b11/merged","created":"2024-08-16T00:19:16.005478689Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-08-16T00:19:15.245173518Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"1.0.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"12:7b:0f:a3:a5:88\"},{\"name\":\"veth91485e8c\",\"mac\":\"2a:44:b5:f2:0b:79\"},{\"name\":\"eth0\",\"mac\":\"6a:ea:17:32:e6:21\",\"sandbox\":\"/var/run/netns/a7783b4e-e2e3-4b10-bac2-9f30b22cc179\"}],\"ips\":[{\"interface\":2,\"address\":\"10.244.0.2/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244.0.1\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod064eb87c-e7df-4a1b-840e-bcb9cca1c05b","io.kubernetes.cri-o.ContainerID":"f86276a9514bd39776085963912a9174216a24447d5
f90ea159638f815a6e140","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6f6b679f8f-q2x2q_kube-system_064eb87c-e7df-4a1b-840e-bcb9cca1c05b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-08-16T00:19:15.566635886Z","io.kubernetes.cri-o.HostName":"coredns-6f6b679f8f-q2x2q","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"coredns-6f6b679f8f-q2x2q","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6f6b679f8f-q2x2q\",\"k8s-app\":\"kube-dns\",\"pod-template-hash\":\"6f6b679f8f\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"064eb87c-e7df-4a1b-840e-bcb9cca1c05b\"}","io.kubernetes.cri-o
.LogPath":"/var/log/pods/kube-system_coredns-6f6b679f8f-q2x2q_064eb87c-e7df-4a1b-840e-bcb9cca1c05b/f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6f6b679f8f-q2x2q\",\"uid\":\"064eb87c-e7df-4a1b-840e-bcb9cca1c05b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4cef2d8a9bb67161e5dc78542ff83a602674524ab2902882f1725c313b729b11/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6f6b679f8f-q2x2q_kube-system_064eb87c-e7df-4a1b-840e-bcb9cca1c05b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"memory_limit_in_bytes\":178257920,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/stor
age/overlay-containers/f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6f6b679f8f-q2x2q_kube-system_064eb87c-e7df-4a1b-840e-bcb9cca1c05b_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140/userdata/shm","io.kubernetes.pod.name":"coredns-6f6b679f8f-q2x2q","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"064eb87c-e7df-4a1b-840e-bcb9cca1c05b","k8s-app":"kube-dns","kubernetes.io/config.seen":"2024-08-16T00:19:15.245173518Z","kubernetes.io/config.source":"api","pod-template-hash":"6f6b679f8f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b","pid":0,"status":"s
topped","bundle":"/run/containers/storage/overlay-containers/ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b/userdata","rootfs":"/var/lib/containers/storage/overlay/15ecefaaace53c3d6377ef6ac5059c0956f93318a109fccc146a0fb4c10fe123/merged","created":"2024-08-16T00:19:04.599700708Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"8d21e76056ed2ea3ddf9c0da2f340d1b\",\"kubernetes.io/config.seen\":\"2024-08-16T00:19:03.836497914Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod8d21e76056ed2ea3ddf9c0da2f340d1b","io.kubernetes.cri-o.ContainerID":"ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-pause-937923_kube-system_8d21e76056ed2ea3ddf9c0da2f340d1b_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Cre
ated":"2024-08-16T00:19:04.515158752Z","io.kubernetes.cri-o.HostName":"pause-937923","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.10","io.kubernetes.cri-o.KubeName":"kube-controller-manager-pause-937923","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"8d21e76056ed2ea3ddf9c0da2f340d1b\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-937923\",\"component\":\"kube-controller-manager\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-937923_8d21e76056ed2ea3ddf9c0da2f340d1b/ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b.log","io.kubernetes.cri-o.Met
adata":"{\"name\":\"kube-controller-manager-pause-937923\",\"uid\":\"8d21e76056ed2ea3ddf9c0da2f340d1b\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/15ecefaaace53c3d6377ef6ac5059c0956f93318a109fccc146a0fb4c10fe123/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-pause-937923_kube-system_8d21e76056ed2ea3ddf9c0da2f340d1b_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"ffd40e0592a81fc20
001b8baad8edf1173720aef675e45401da4e972f007627b","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-937923_kube-system_8d21e76056ed2ea3ddf9c0da2f340d1b_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-pause-937923","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"8d21e76056ed2ea3ddf9c0da2f340d1b","kubernetes.io/config.hash":"8d21e76056ed2ea3ddf9c0da2f340d1b","kubernetes.io/config.seen":"2024-08-16T00:19:03.836497914Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"}]
	I0816 00:20:11.471915   63437 cri.go:126] list returned 14 containers
	I0816 00:20:11.471942   63437 cri.go:129] container: {ID:0ead2d8705d53f50daee1fc2c64295c628926d508bae5d339c8a5215627feb31 Status:stopped}
	I0816 00:20:11.471985   63437 cri.go:135] skipping {0ead2d8705d53f50daee1fc2c64295c628926d508bae5d339c8a5215627feb31 stopped}: state = "stopped", want "paused"
	I0816 00:20:11.472000   63437 cri.go:129] container: {ID:29a242cef9217bcc0a258f9b1596207d08e5716c4eb5c07f47e3db991670e785 Status:stopped}
	I0816 00:20:11.472007   63437 cri.go:135] skipping {29a242cef9217bcc0a258f9b1596207d08e5716c4eb5c07f47e3db991670e785 stopped}: state = "stopped", want "paused"
	I0816 00:20:11.472016   63437 cri.go:129] container: {ID:3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4 Status:stopped}
	I0816 00:20:11.472025   63437 cri.go:131] skipping 3448212c88bf2642988fe4c2f93b077aa47f42ab84eec6b1a4e48db659a4bdf4 - not in ps
	I0816 00:20:11.472034   63437 cri.go:129] container: {ID:42d9216cb7a9cfbe9de0233ed9d993fd1de9980fc7de41c3dbb58618771348e8 Status:stopped}
	I0816 00:20:11.472044   63437 cri.go:135] skipping {42d9216cb7a9cfbe9de0233ed9d993fd1de9980fc7de41c3dbb58618771348e8 stopped}: state = "stopped", want "paused"
	I0816 00:20:11.472054   63437 cri.go:129] container: {ID:468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47 Status:stopped}
	I0816 00:20:11.472061   63437 cri.go:131] skipping 468980a301ebf7ca041641ee1276c8c5f820cdd30287e400e116142d2339fe47 - not in ps
	I0816 00:20:11.472070   63437 cri.go:129] container: {ID:6e6d645dc3c2921525716d0aef748c8529727c3d84d149d079635e6f458d0e1e Status:stopped}
	I0816 00:20:11.472086   63437 cri.go:135] skipping {6e6d645dc3c2921525716d0aef748c8529727c3d84d149d079635e6f458d0e1e stopped}: state = "stopped", want "paused"
	I0816 00:20:11.472095   63437 cri.go:129] container: {ID:717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e Status:stopped}
	I0816 00:20:11.472103   63437 cri.go:131] skipping 717d6e53898aa1738ec8fc02c8da51a817357d8e7c8bf2a15374bbb74baff94e - not in ps
	I0816 00:20:11.472111   63437 cri.go:129] container: {ID:8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c Status:stopped}
	I0816 00:20:11.472121   63437 cri.go:131] skipping 8c731e1b6be25136d847ce94bd216003b893cfda5fa1998a535e2d11d511622c - not in ps
	I0816 00:20:11.472129   63437 cri.go:129] container: {ID:ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc Status:stopped}
	I0816 00:20:11.472138   63437 cri.go:131] skipping ac25e9cdd706112873eb7973691b24f54f9f0b93f5e87fd4b103ab14924880dc - not in ps
	I0816 00:20:11.472146   63437 cri.go:129] container: {ID:ae8d9ecc4085dd979d8acb92679581a30c305524bca008e1b8de66a5f4a4a231 Status:stopped}
	I0816 00:20:11.472158   63437 cri.go:135] skipping {ae8d9ecc4085dd979d8acb92679581a30c305524bca008e1b8de66a5f4a4a231 stopped}: state = "stopped", want "paused"
	I0816 00:20:11.472171   63437 cri.go:129] container: {ID:ae903c3a1050938022922edb0a7603f28f0560a2cfa40de63a678c65fe238596 Status:stopped}
	I0816 00:20:11.472181   63437 cri.go:135] skipping {ae903c3a1050938022922edb0a7603f28f0560a2cfa40de63a678c65fe238596 stopped}: state = "stopped", want "paused"
	I0816 00:20:11.472187   63437 cri.go:129] container: {ID:c979bb729012e1480cd44397b1c1fa6270197f7e3984ec67df120193a4ffa219 Status:stopped}
	I0816 00:20:11.472197   63437 cri.go:135] skipping {c979bb729012e1480cd44397b1c1fa6270197f7e3984ec67df120193a4ffa219 stopped}: state = "stopped", want "paused"
	I0816 00:20:11.472206   63437 cri.go:129] container: {ID:f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140 Status:stopped}
	I0816 00:20:11.472214   63437 cri.go:131] skipping f86276a9514bd39776085963912a9174216a24447d5f90ea159638f815a6e140 - not in ps
	I0816 00:20:11.472223   63437 cri.go:129] container: {ID:ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b Status:stopped}
	I0816 00:20:11.472231   63437 cri.go:131] skipping ffd40e0592a81fc20001b8baad8edf1173720aef675e45401da4e972f007627b - not in ps
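
The trace above shows minikube's CRI helper deciding which containers to act on: anything whose state does not match the requested one ("paused" here) and anything that was not returned by `crictl ps` is skipped. A minimal Go sketch of that filter follows; the `Container` type and the `inPS` set are simplified stand-ins for illustration, not minikube's real cri.go types.

package main

import "fmt"

// Container is a simplified stand-in for minikube's internal type (assumption).
type Container struct {
    ID     string
    Status string
}

// filterByState keeps containers whose status matches want and whose ID was
// also returned by `crictl ps`; everything else is skipped, mirroring the
// "skipping ...: state = stopped, want paused" and "not in ps" lines above.
func filterByState(all []Container, inPS map[string]bool, want string) []string {
    var keep []string
    for _, c := range all {
        if !inPS[c.ID] {
            fmt.Printf("skipping %s - not in ps\n", c.ID)
            continue
        }
        if c.Status != want {
            fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
            continue
        }
        keep = append(keep, c.ID)
    }
    return keep
}

func main() {
    all := []Container{{ID: "0ead2d8705d5", Status: "stopped"}}
    fmt.Println(filterByState(all, map[string]bool{"0ead2d8705d5": true}, "paused"))
}
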
	I0816 00:20:11.472286   63437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:20:11.483742   63437 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:20:11.483763   63437 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:20:11.483822   63437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:20:11.494326   63437 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:20:11.495386   63437 kubeconfig.go:125] found "pause-937923" server: "https://192.168.83.162:8443"
	I0816 00:20:11.496843   63437 kapi.go:59] client config for pause-937923: &rest.Config{Host:"https://192.168.83.162:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923/client.crt", KeyFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/profiles/pause-937923/client.key", CAFile:"/home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]st
ring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 00:20:11.497541   63437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:20:11.509073   63437 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.162
	I0816 00:20:11.509099   63437 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:20:11.509110   63437 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:20:11.509158   63437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:20:11.552475   63437 cri.go:89] found id: "42d9216cb7a9cfbe9de0233ed9d993fd1de9980fc7de41c3dbb58618771348e8"
	I0816 00:20:11.552497   63437 cri.go:89] found id: "6e6d645dc3c2921525716d0aef748c8529727c3d84d149d079635e6f458d0e1e"
	I0816 00:20:11.552501   63437 cri.go:89] found id: "29a242cef9217bcc0a258f9b1596207d08e5716c4eb5c07f47e3db991670e785"
	I0816 00:20:11.552504   63437 cri.go:89] found id: "c979bb729012e1480cd44397b1c1fa6270197f7e3984ec67df120193a4ffa219"
	I0816 00:20:11.552518   63437 cri.go:89] found id: "0ead2d8705d53f50daee1fc2c64295c628926d508bae5d339c8a5215627feb31"
	I0816 00:20:11.552521   63437 cri.go:89] found id: "ae903c3a1050938022922edb0a7603f28f0560a2cfa40de63a678c65fe238596"
	I0816 00:20:11.552524   63437 cri.go:89] found id: "ae8d9ecc4085dd979d8acb92679581a30c305524bca008e1b8de66a5f4a4a231"
	I0816 00:20:11.552526   63437 cri.go:89] found id: ""
	I0816 00:20:11.552531   63437 cri.go:252] Stopping containers: [42d9216cb7a9cfbe9de0233ed9d993fd1de9980fc7de41c3dbb58618771348e8 6e6d645dc3c2921525716d0aef748c8529727c3d84d149d079635e6f458d0e1e 29a242cef9217bcc0a258f9b1596207d08e5716c4eb5c07f47e3db991670e785 c979bb729012e1480cd44397b1c1fa6270197f7e3984ec67df120193a4ffa219 0ead2d8705d53f50daee1fc2c64295c628926d508bae5d339c8a5215627feb31 ae903c3a1050938022922edb0a7603f28f0560a2cfa40de63a678c65fe238596 ae8d9ecc4085dd979d8acb92679581a30c305524bca008e1b8de66a5f4a4a231]
	I0816 00:20:11.552578   63437 ssh_runner.go:195] Run: which crictl
	I0816 00:20:11.557006   63437 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 42d9216cb7a9cfbe9de0233ed9d993fd1de9980fc7de41c3dbb58618771348e8 6e6d645dc3c2921525716d0aef748c8529727c3d84d149d079635e6f458d0e1e 29a242cef9217bcc0a258f9b1596207d08e5716c4eb5c07f47e3db991670e785 c979bb729012e1480cd44397b1c1fa6270197f7e3984ec67df120193a4ffa219 0ead2d8705d53f50daee1fc2c64295c628926d508bae5d339c8a5215627feb31 ae903c3a1050938022922edb0a7603f28f0560a2cfa40de63a678c65fe238596 ae8d9ecc4085dd979d8acb92679581a30c305524bca008e1b8de66a5f4a4a231
	I0816 00:20:11.634673   63437 ssh_runner.go:195] Run: sudo systemctl stop kubelet
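
The stop sequence above boils down to three commands run over SSH: list kube-system container IDs with crictl, stop them with a 10-second timeout, then stop the kubelet unit. The sketch below reproduces that sequence with os/exec; the command names and flags are taken from the trace, while the helper itself is only an illustration of what ssh_runner executes, not its implementation.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// stopKubeSystem mirrors the trace: list kube-system container IDs with
// crictl, stop them with a 10s timeout, then stop the kubelet unit.
func stopKubeSystem() error {
    out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
        "--label", "io.kubernetes.pod.namespace=kube-system").Output()
    if err != nil {
        return fmt.Errorf("list containers: %w", err)
    }
    ids := strings.Fields(string(out))
    if len(ids) > 0 {
        args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
        if err := exec.Command("sudo", args...).Run(); err != nil {
            return fmt.Errorf("crictl stop: %w", err)
        }
    }
    return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}

func main() {
    if err := stopKubeSystem(); err != nil {
        fmt.Println("error:", err)
    }
}
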
	I0816 00:20:11.684942   63437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:20:11.697448   63437 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug 16 00:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Aug 16 00:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Aug 16 00:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Aug 16 00:19 /etc/kubernetes/scheduler.conf
	
	I0816 00:20:11.697519   63437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:20:11.707749   63437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:20:11.717497   63437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:20:11.727821   63437 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:20:11.727887   63437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:20:11.738536   63437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:20:11.751278   63437 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:20:11.751335   63437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
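
Before regenerating kubeconfigs, the trace greps each file under /etc/kubernetes for the control-plane.minikube.internal:8443 endpoint and deletes any file that does not reference it, so the later `kubeadm init phase kubeconfig all` recreates it. A minimal sketch of that pruning step, assuming the same file list and endpoint and simplified error handling:

package main

import (
    "fmt"
    "os"
    "strings"
)

// pruneStaleKubeconfigs removes kubeconfig files that do not reference the
// expected control-plane endpoint, mirroring the grep/rm sequence in the trace.
func pruneStaleKubeconfigs(endpoint string, files []string) {
    for _, f := range files {
        data, err := os.ReadFile(f)
        if err != nil {
            continue // missing or unreadable; kubeadm will recreate it
        }
        if !strings.Contains(string(data), endpoint) {
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
            _ = os.Remove(f)
        }
    }
}

func main() {
    pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    })
}
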
	I0816 00:20:11.764377   63437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:20:11.775311   63437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:20:11.837590   63437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:20:12.679042   63437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:20:12.920668   63437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:20:12.997872   63437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
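
The restart then replays the relevant `kubeadm init` phases in order: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all against /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence using os/exec; the phase names and config path come straight from the trace, but the helper is illustrative rather than minikube's actual bootstrapper code.

package main

import (
    "fmt"
    "os/exec"
)

// rerunControlPlanePhases replays the kubeadm init phases the trace runs during
// a soft restart: certs, kubeconfigs, kubelet bootstrap, static pods, local etcd.
func rerunControlPlanePhases(kubeadmDir, config string) error {
    phases := [][]string{
        {"certs", "all"},
        {"kubeconfig", "all"},
        {"kubelet-start"},
        {"control-plane", "all"},
        {"etcd", "local"},
    }
    for _, p := range phases {
        args := append([]string{"init", "phase"}, p...)
        args = append(args, "--config", config)
        cmd := exec.Command(kubeadmDir+"/kubeadm", args...)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
        }
    }
    return nil
}

func main() {
    fmt.Println(rerunControlPlanePhases("/var/lib/minikube/binaries/v1.31.0",
        "/var/tmp/minikube/kubeadm.yaml"))
}
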
	I0816 00:20:13.156534   63437 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:20:13.156625   63437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:20:13.657048   63437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:20:14.156697   63437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:20:14.213623   63437 api_server.go:72] duration metric: took 1.057101955s to wait for apiserver process to appear ...
	I0816 00:20:14.213650   63437 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:20:14.213670   63437 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
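
Once the static pods are regenerated, minikube waits for a kube-apiserver process (`pgrep`) and then polls the /healthz endpoint until it answers. The sketch below shows a readiness poll of that kind; the insecure TLS setting and the 500 ms interval are assumptions made for the example, not minikube's actual client configuration, which uses the cluster CA from the kubeconfig.

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline expires. Skipping certificate verification is for the sketch only.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
    fmt.Println(waitForHealthz("https://192.168.83.162:8443/healthz", 2*time.Minute))
}
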
	I0816 00:20:14.568945   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) Calling .GetIP
	I0816 00:20:14.571809   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:14.572187   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:65:e8", ip: ""} in network mk-kubernetes-upgrade-165951: {Iface:virbr4 ExpiryTime:2024-08-16 01:19:43 +0000 UTC Type:0 Mac:52:54:00:7e:65:e8 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:kubernetes-upgrade-165951 Clientid:01:52:54:00:7e:65:e8}
	I0816 00:20:14.572218   63623 main.go:141] libmachine: (kubernetes-upgrade-165951) DBG | domain kubernetes-upgrade-165951 has defined IP address 192.168.72.157 and MAC address 52:54:00:7e:65:e8 in network mk-kubernetes-upgrade-165951
	I0816 00:20:14.572386   63623 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:20:14.577235   63623 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-165951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-165951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:20:14.577366   63623 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:20:14.577431   63623 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:20:14.628710   63623 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:20:14.628750   63623 crio.go:433] Images already preloaded, skipping extraction
	I0816 00:20:14.628799   63623 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:20:14.668312   63623 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:20:14.668337   63623 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:20:14.668345   63623 kubeadm.go:934] updating node { 192.168.72.157 8443 v1.31.0 crio true true} ...
	I0816 00:20:14.668473   63623 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-165951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-165951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:20:14.668563   63623 ssh_runner.go:195] Run: crio config
	I0816 00:20:14.728974   63623 cni.go:84] Creating CNI manager for ""
	I0816 00:20:14.729003   63623 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:20:14.729019   63623 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:20:14.729054   63623 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.157 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-165951 NodeName:kubernetes-upgrade-165951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:20:14.729227   63623 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-165951"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.157"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:20:14.729308   63623 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:20:14.740990   63623 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:20:14.741076   63623 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:20:14.751461   63623 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0816 00:20:14.771242   63623 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:20:14.789925   63623 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
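
The scp lines above stage the kubelet unit files and write the generated kubeadm configuration (the multi-document YAML dumped a few lines earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) to /var/tmp/minikube/kubeadm.yaml.new on the node. As a rough illustration only (not minikube's config handling), the Go sketch below walks such a multi-document file and prints each document's apiVersion and kind; it assumes the gopkg.in/yaml.v3 module is available.

    // Hedged sketch: enumerate the documents in the kubeadm config written above.
    // Assumes gopkg.in/yaml.v3; illustration only, not part of minikube.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Path used by the "kubeadm init phase" call logged further down.
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break // end of the multi-document stream
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s / %s\n", doc["apiVersion"], doc["kind"])
        }
    }
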
	I0816 00:20:14.807637   63623 ssh_runner.go:195] Run: grep 192.168.72.157	control-plane.minikube.internal$ /etc/hosts
	I0816 00:20:14.812240   63623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:20:14.940102   63623 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:20:14.956041   63623 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951 for IP: 192.168.72.157
	I0816 00:20:14.956072   63623 certs.go:194] generating shared ca certs ...
	I0816 00:20:14.956091   63623 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:20:14.956241   63623 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:20:14.956300   63623 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:20:14.956313   63623 certs.go:256] generating profile certs ...
	I0816 00:20:14.956435   63623 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/client.key
	I0816 00:20:14.956519   63623 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.key.869b6558
	I0816 00:20:14.956569   63623 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.key
	I0816 00:20:14.956724   63623 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:20:14.956760   63623 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:20:14.956769   63623 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:20:14.956798   63623 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:20:14.956821   63623 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:20:14.956860   63623 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:20:14.956896   63623 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:20:14.957608   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:20:14.984357   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:20:15.011383   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:20:15.038892   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:20:15.499532   62772 pod_ready.go:103] pod "coredns-6f6b679f8f-wmmtl" in "kube-system" namespace has status "Ready":"False"
	I0816 00:20:17.501961   62772 pod_ready.go:103] pod "coredns-6f6b679f8f-wmmtl" in "kube-system" namespace has status "Ready":"False"
	I0816 00:20:17.423804   63437 api_server.go:279] https://192.168.83.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:20:17.423835   63437 api_server.go:103] status: https://192.168.83.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:20:17.423850   63437 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0816 00:20:17.449073   63437 api_server.go:279] https://192.168.83.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:20:17.449117   63437 api_server.go:103] status: https://192.168.83.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:20:17.714456   63437 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0816 00:20:17.718858   63437 api_server.go:279] https://192.168.83.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:20:17.718885   63437 api_server.go:103] status: https://192.168.83.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:20:18.214008   63437 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0816 00:20:18.222706   63437 api_server.go:279] https://192.168.83.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:20:18.222735   63437 api_server.go:103] status: https://192.168.83.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:20:18.714359   63437 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0816 00:20:18.721929   63437 api_server.go:279] https://192.168.83.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:20:18.721972   63437 api_server.go:103] status: https://192.168.83.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:20:19.214284   63437 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0816 00:20:19.219514   63437 api_server.go:279] https://192.168.83.162:8443/healthz returned 200:
	ok
	I0816 00:20:19.236561   63437 api_server.go:141] control plane version: v1.31.0
	I0816 00:20:19.236590   63437 api_server.go:131] duration metric: took 5.022934s to wait for apiserver health ...
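
The 403 → 500 → 200 progression above is what an unauthenticated /healthz probe sees while the apiserver finishes starting: at first RBAC does not yet allow the anonymous user to read /healthz, then the endpoint answers 500 with a per-check breakdown while post-start hooks such as rbac/bootstrap-roles are still failing, and finally it returns 200 once every check passes. Below is a minimal sketch of such an anonymous poll in Go; it skips TLS verification because no client certificate is presented, and it is an illustration only, not minikube's api_server.go checker.

    // Hedged sketch of an unauthenticated /healthz poll like the one logged above.
    // Illustration only, not minikube's implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            // Address taken from the log above.
            resp, err := client.Get("https://192.168.83.162:8443/healthz")
            if err != nil {
                time.Sleep(500 * time.Millisecond)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%d: %s\n", resp.StatusCode, body)
            if resp.StatusCode == http.StatusOK {
                return // apiserver reports healthy
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
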
	I0816 00:20:19.236599   63437 cni.go:84] Creating CNI manager for ""
	I0816 00:20:19.236605   63437 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:20:19.238400   63437 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:20:19.239564   63437 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:20:19.254771   63437 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:20:19.278805   63437 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:20:19.278899   63437 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 00:20:19.278920   63437 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 00:20:19.293244   63437 system_pods.go:59] 6 kube-system pods found
	I0816 00:20:19.293287   63437 system_pods.go:61] "coredns-6f6b679f8f-n6ntf" [2952e1b9-66a0-49ed-8017-750881d40fb3] Running
	I0816 00:20:19.293299   63437 system_pods.go:61] "etcd-pause-937923" [cd279cfa-466d-4da1-8f5e-bfbc8da754ac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:20:19.293309   63437 system_pods.go:61] "kube-apiserver-pause-937923" [7fa422c8-e3e1-422e-8b7c-34e6061f3262] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:20:19.293320   63437 system_pods.go:61] "kube-controller-manager-pause-937923" [224a4f15-8619-43cb-806b-2f80e90b8f9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:20:19.293331   63437 system_pods.go:61] "kube-proxy-fvn9w" [723f3f40-4e4a-4a15-bfde-4ec96aa33725] Running
	I0816 00:20:19.293339   63437 system_pods.go:61] "kube-scheduler-pause-937923" [079ab189-71b5-496f-8361-8b67215a775e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:20:19.293348   63437 system_pods.go:74] duration metric: took 14.523292ms to wait for pod list to return data ...
	I0816 00:20:19.293359   63437 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:20:19.298885   63437 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:20:19.298912   63437 node_conditions.go:123] node cpu capacity is 2
	I0816 00:20:19.298924   63437 node_conditions.go:105] duration metric: took 5.558929ms to run NodePressure ...
	I0816 00:20:19.298942   63437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:20:19.559104   63437 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:20:19.565014   63437 kubeadm.go:739] kubelet initialised
	I0816 00:20:19.565038   63437 kubeadm.go:740] duration metric: took 5.91191ms waiting for restarted kubelet to initialise ...
	I0816 00:20:19.565048   63437 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:20:19.569586   63437 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-n6ntf" in "kube-system" namespace to be "Ready" ...
	I0816 00:20:19.575386   63437 pod_ready.go:93] pod "coredns-6f6b679f8f-n6ntf" in "kube-system" namespace has status "Ready":"True"
	I0816 00:20:19.575408   63437 pod_ready.go:82] duration metric: took 5.798303ms for pod "coredns-6f6b679f8f-n6ntf" in "kube-system" namespace to be "Ready" ...
	I0816 00:20:19.575419   63437 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-937923" in "kube-system" namespace to be "Ready" ...
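
Each pod_ready wait above asks the API server for the pod and inspects its Ready condition. The client-go sketch below shows that check in isolation; the kubeconfig path is a guess at a default location, the pod name comes from this run, and the code is an illustration, not minikube's pod_ready.go.

    // Hedged sketch of the "Ready" condition check behind the pod_ready waits above.
    // Illustration only, not minikube's implementation.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed default path
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-pause-937923", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", podIsReady(pod))
    }
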
	I0816 00:20:15.072649   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0816 00:20:15.152241   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:20:15.235526   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:20:15.349438   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kubernetes-upgrade-165951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:20:15.438143   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:20:15.462583   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:20:15.487802   63623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:20:15.517342   63623 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:20:15.541957   63623 ssh_runner.go:195] Run: openssl version
	I0816 00:20:15.549396   63623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:20:15.562964   63623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:20:15.567869   63623 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:20:15.567920   63623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:20:15.574106   63623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:20:15.585406   63623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:20:15.596857   63623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:20:15.601956   63623 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:20:15.602016   63623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:20:15.609505   63623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:20:15.620433   63623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:20:15.633397   63623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:20:15.638427   63623 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:20:15.638497   63623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:20:15.644392   63623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
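
The ls/openssl/ln sequences above install each certificate under /etc/ssl/certs using its OpenSSL subject hash (for example 3ec20f2e.0 and b5213941.0), so TLS clients on the node can locate the trust anchors by hash. As a rough illustration of what that trust enables (not part of minikube), the Go sketch below loads minikubeCA.pem into a certificate pool and verifies the apiserver certificate against it; the file paths are taken from this run.

    // Hedged sketch: verify a certificate against the minikube CA installed above.
    // Paths come from the log; illustration only, not part of minikube.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        caPEM, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            panic(err)
        }
        roots := x509.NewCertPool()
        if !roots.AppendCertsFromPEM(caPEM) {
            panic("could not parse CA certificate")
        }

        leafPEM, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(leafPEM)
        if block == nil {
            panic("no PEM block found")
        }
        leaf, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }

        // Leaving DNSName empty skips hostname matching; only the chain is checked.
        if _, err := leaf.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
            panic(err)
        }
        fmt.Println("apiserver.crt chains to minikubeCA")
    }
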
	I0816 00:20:15.654415   63623 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:20:15.659442   63623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:20:15.665538   63623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:20:15.671875   63623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:20:15.678145   63623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:20:15.684144   63623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:20:15.690116   63623 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
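
The openssl x509 -noout -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least another 24 hours (86400 seconds). A minimal Go equivalent using crypto/x509 is sketched below; it is an illustration only, not what minikube executes (minikube shells out to openssl as logged).

    // Hedged sketch of the "-checkend 86400" test above: parse a PEM certificate
    // and report whether it expires within the next 24 hours. Illustration only.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 24h")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }
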
	I0816 00:20:15.695689   63623 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-165951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-165951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:20:15.695765   63623 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:20:15.695814   63623 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:20:15.734357   63623 cri.go:89] found id: "bf1d30f907c2154716175aedf0d6697d255dbc797688819af864d0a270dbe7b4"
	I0816 00:20:15.734381   63623 cri.go:89] found id: "084f91112d945526de57d9a9e2058ea5d56c387418461c1f704bccb82d4a95c5"
	I0816 00:20:15.734387   63623 cri.go:89] found id: "7309973a380c71cff9639b67ab8d02337f5f2fb85e202e4b2515b7f6eaab5f58"
	I0816 00:20:15.734398   63623 cri.go:89] found id: "325ccf6732d9bfb6a2b7d01fa2d511ea94e929a654ba2098edefae87b50704b4"
	I0816 00:20:15.734401   63623 cri.go:89] found id: ""
	I0816 00:20:15.734442   63623 ssh_runner.go:195] Run: sudo runc list -f json
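
The crictl invocation above asks CRI-O, over its unix socket, for every kube-system container; the CRI-O debug entries later in this report show the corresponding ListContainers gRPC requests. Below is a minimal sketch of the same call against the CRI v1 API, assuming the k8s.io/cri-api and google.golang.org/grpc modules; it is an illustration only, not minikube's cri.go.

    // Hedged sketch of the ListContainers call behind
    // "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system",
    // speaking CRI v1 directly to the CRI-O socket. Illustration only.
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{
                LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "kube-system"},
            },
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Id) // same IDs as the "found id:" lines above
        }
    }
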
	
	
	==> CRI-O <==
	Aug 16 00:20:24 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:24.973679043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723767624973601270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=058fd672-b9aa-456a-b0df-729301305d38 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:20:24 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:24.974685709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ed98ac9-5378-414c-b93d-b227a7e1e2c1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:24 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:24.974818732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ed98ac9-5378-414c-b93d-b227a7e1e2c1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:24 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:24.975081709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7f05d0e0ab33c834399e529c3b71a160ff0d7f9e1f916d48e212c197b6f22c9,PodSandboxId:07dd12d8a1ac799f078273ebc5613aba2b20cdad08f1f51de188b765eb40ff5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723767618015563484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddeb173c72c9e7a28ee42c238eba565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004e4584a2c3b05123e7404fa039f90eccfdd626d50c3d11c3c40ca81c6f7249,PodSandboxId:00e8d6fba91e94a56a80a19a640db40b8e476c11306877b774ea10a042755097,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723767618017637313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b634b82a8d616cbeb7ab54d1503e680,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbe4389b3425f30178170c19d9e988d58cbb7c4f1b51e056146ec3469c88c5e,PodSandboxId:40f1b770c80ccc63a3196e5afe9a07c2b9fcf58ab62dd6f5203776ed0b092b6a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723767617999374987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e2f8e7fbaff5af59aa1d84a51d2105,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad667bf2761ce3e9238754390108ecca9e077f669db79cded231f7a92ca55f0c,PodSandboxId:221eeb083795536db22980e584e1cf6a8fa7bc10a6124faffe24ab92b38b2679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723767618010249145,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c625a9ebc8379258593aed874d21eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1d30f907c2154716175aedf0d6697d255dbc797688819af864d0a270dbe7b4,PodSandboxId:b09daa04803ec8db34bd59317ea0ed3abb0ed9edf39f85cb709f3bfe4d153e88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723767612356382902,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddeb173c72c9e7a28ee42c238eba565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:084f91112d945526de57d9a9e2058ea5d56c387418461c1f704bccb82d4a95c5,PodSandboxId:88bba1c7c50ad4e53fc3ab521543327b8ce5485b63bdcdb54c8a0c091c65b8f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723767612350032665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e2f8e7fbaff5af59aa1d84a51d2105,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7309973a380c71cff9639b67ab8d02337f5f2fb85e202e4b2515b7f6eaab5f58,PodSandboxId:10393b8670ca7082760d969594e63df87545c902db3bff8e958744622c9d31f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723767612306559789,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b634b82a8d616cbeb7ab54d1503e680,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ccf6732d9bfb6a2b7d01fa2d511ea94e929a654ba2098edefae87b50704b4,PodSandboxId:f04814c7737e4d784fe8f1ae281be2c487180c46272bd9233cba243b12e4ae59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723767612279853403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c625a9ebc8379258593aed874d21eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ed98ac9-5378-414c-b93d-b227a7e1e2c1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.022442478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b7522cd-44b6-4070-af31-2a48672a8dd3 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.022517525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b7522cd-44b6-4070-af31-2a48672a8dd3 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.023907130Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1164ce37-de91-4c06-90c1-7b995df3e53e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.024647103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723767625024618390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1164ce37-de91-4c06-90c1-7b995df3e53e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.025515426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65b0d13a-d753-4e0b-ac21-8e4b986c5fd1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.025583050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65b0d13a-d753-4e0b-ac21-8e4b986c5fd1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.025840701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7f05d0e0ab33c834399e529c3b71a160ff0d7f9e1f916d48e212c197b6f22c9,PodSandboxId:07dd12d8a1ac799f078273ebc5613aba2b20cdad08f1f51de188b765eb40ff5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723767618015563484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddeb173c72c9e7a28ee42c238eba565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004e4584a2c3b05123e7404fa039f90eccfdd626d50c3d11c3c40ca81c6f7249,PodSandboxId:00e8d6fba91e94a56a80a19a640db40b8e476c11306877b774ea10a042755097,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723767618017637313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b634b82a8d616cbeb7ab54d1503e680,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbe4389b3425f30178170c19d9e988d58cbb7c4f1b51e056146ec3469c88c5e,PodSandboxId:40f1b770c80ccc63a3196e5afe9a07c2b9fcf58ab62dd6f5203776ed0b092b6a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723767617999374987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e2f8e7fbaff5af59aa1d84a51d2105,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad667bf2761ce3e9238754390108ecca9e077f669db79cded231f7a92ca55f0c,PodSandboxId:221eeb083795536db22980e584e1cf6a8fa7bc10a6124faffe24ab92b38b2679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723767618010249145,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c625a9ebc8379258593aed874d21eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1d30f907c2154716175aedf0d6697d255dbc797688819af864d0a270dbe7b4,PodSandboxId:b09daa04803ec8db34bd59317ea0ed3abb0ed9edf39f85cb709f3bfe4d153e88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723767612356382902,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddeb173c72c9e7a28ee42c238eba565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:084f91112d945526de57d9a9e2058ea5d56c387418461c1f704bccb82d4a95c5,PodSandboxId:88bba1c7c50ad4e53fc3ab521543327b8ce5485b63bdcdb54c8a0c091c65b8f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723767612350032665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e2f8e7fbaff5af59aa1d84a51d2105,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7309973a380c71cff9639b67ab8d02337f5f2fb85e202e4b2515b7f6eaab5f58,PodSandboxId:10393b8670ca7082760d969594e63df87545c902db3bff8e958744622c9d31f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723767612306559789,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b634b82a8d616cbeb7ab54d1503e680,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ccf6732d9bfb6a2b7d01fa2d511ea94e929a654ba2098edefae87b50704b4,PodSandboxId:f04814c7737e4d784fe8f1ae281be2c487180c46272bd9233cba243b12e4ae59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723767612279853403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c625a9ebc8379258593aed874d21eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65b0d13a-d753-4e0b-ac21-8e4b986c5fd1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.079130489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e951743c-60df-41d1-8b14-ac6c2927f7aa name=/runtime.v1.RuntimeService/Version
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.079238693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e951743c-60df-41d1-8b14-ac6c2927f7aa name=/runtime.v1.RuntimeService/Version
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.081118097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bec8e2d-be81-4253-b31d-6e02ef13ff85 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.081642389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723767625081606378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bec8e2d-be81-4253-b31d-6e02ef13ff85 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.082384944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d765069-d16a-4630-a8b3-eb4394f81d01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.082458249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d765069-d16a-4630-a8b3-eb4394f81d01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.082810359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7f05d0e0ab33c834399e529c3b71a160ff0d7f9e1f916d48e212c197b6f22c9,PodSandboxId:07dd12d8a1ac799f078273ebc5613aba2b20cdad08f1f51de188b765eb40ff5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723767618015563484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddeb173c72c9e7a28ee42c238eba565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004e4584a2c3b05123e7404fa039f90eccfdd626d50c3d11c3c40ca81c6f7249,PodSandboxId:00e8d6fba91e94a56a80a19a640db40b8e476c11306877b774ea10a042755097,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723767618017637313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b634b82a8d616cbeb7ab54d1503e680,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbe4389b3425f30178170c19d9e988d58cbb7c4f1b51e056146ec3469c88c5e,PodSandboxId:40f1b770c80ccc63a3196e5afe9a07c2b9fcf58ab62dd6f5203776ed0b092b6a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723767617999374987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e2f8e7fbaff5af59aa1d84a51d2105,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad667bf2761ce3e9238754390108ecca9e077f669db79cded231f7a92ca55f0c,PodSandboxId:221eeb083795536db22980e584e1cf6a8fa7bc10a6124faffe24ab92b38b2679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723767618010249145,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c625a9ebc8379258593aed874d21eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1d30f907c2154716175aedf0d6697d255dbc797688819af864d0a270dbe7b4,PodSandboxId:b09daa04803ec8db34bd59317ea0ed3abb0ed9edf39f85cb709f3bfe4d153e88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723767612356382902,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddeb173c72c9e7a28ee42c238eba565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:084f91112d945526de57d9a9e2058ea5d56c387418461c1f704bccb82d4a95c5,PodSandboxId:88bba1c7c50ad4e53fc3ab521543327b8ce5485b63bdcdb54c8a0c091c65b8f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723767612350032665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e2f8e7fbaff5af59aa1d84a51d2105,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7309973a380c71cff9639b67ab8d02337f5f2fb85e202e4b2515b7f6eaab5f58,PodSandboxId:10393b8670ca7082760d969594e63df87545c902db3bff8e958744622c9d31f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723767612306559789,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b634b82a8d616cbeb7ab54d1503e680,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ccf6732d9bfb6a2b7d01fa2d511ea94e929a654ba2098edefae87b50704b4,PodSandboxId:f04814c7737e4d784fe8f1ae281be2c487180c46272bd9233cba243b12e4ae59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723767612279853403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c625a9ebc8379258593aed874d21eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d765069-d16a-4630-a8b3-eb4394f81d01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.133073386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12d9d7f9-d145-4123-8d17-630551aa1942 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.133156963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12d9d7f9-d145-4123-8d17-630551aa1942 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.135084502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4309f782-6ba9-4f93-9fa4-7dc139ed1f94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.135680988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723767625135648707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4309f782-6ba9-4f93-9fa4-7dc139ed1f94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.136578427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca9755f6-f8c2-4f8c-9125-ebc21e6b2b47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.136661942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca9755f6-f8c2-4f8c-9125-ebc21e6b2b47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:20:25 kubernetes-upgrade-165951 crio[1878]: time="2024-08-16 00:20:25.137048055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7f05d0e0ab33c834399e529c3b71a160ff0d7f9e1f916d48e212c197b6f22c9,PodSandboxId:07dd12d8a1ac799f078273ebc5613aba2b20cdad08f1f51de188b765eb40ff5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723767618015563484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddeb173c72c9e7a28ee42c238eba565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:004e4584a2c3b05123e7404fa039f90eccfdd626d50c3d11c3c40ca81c6f7249,PodSandboxId:00e8d6fba91e94a56a80a19a640db40b8e476c11306877b774ea10a042755097,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723767618017637313,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b634b82a8d616cbeb7ab54d1503e680,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbe4389b3425f30178170c19d9e988d58cbb7c4f1b51e056146ec3469c88c5e,PodSandboxId:40f1b770c80ccc63a3196e5afe9a07c2b9fcf58ab62dd6f5203776ed0b092b6a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723767617999374987,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e2f8e7fbaff5af59aa1d84a51d2105,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad667bf2761ce3e9238754390108ecca9e077f669db79cded231f7a92ca55f0c,PodSandboxId:221eeb083795536db22980e584e1cf6a8fa7bc10a6124faffe24ab92b38b2679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723767618010249145,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c625a9ebc8379258593aed874d21eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1d30f907c2154716175aedf0d6697d255dbc797688819af864d0a270dbe7b4,PodSandboxId:b09daa04803ec8db34bd59317ea0ed3abb0ed9edf39f85cb709f3bfe4d153e88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723767612356382902,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ddeb173c72c9e7a28ee42c238eba565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:084f91112d945526de57d9a9e2058ea5d56c387418461c1f704bccb82d4a95c5,PodSandboxId:88bba1c7c50ad4e53fc3ab521543327b8ce5485b63bdcdb54c8a0c091c65b8f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723767612350032665,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e2f8e7fbaff5af59aa1d84a51d2105,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7309973a380c71cff9639b67ab8d02337f5f2fb85e202e4b2515b7f6eaab5f58,PodSandboxId:10393b8670ca7082760d969594e63df87545c902db3bff8e958744622c9d31f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723767612306559789,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b634b82a8d616cbeb7ab54d1503e680,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325ccf6732d9bfb6a2b7d01fa2d511ea94e929a654ba2098edefae87b50704b4,PodSandboxId:f04814c7737e4d784fe8f1ae281be2c487180c46272bd9233cba243b12e4ae59,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723767612279853403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-165951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c625a9ebc8379258593aed874d21eae,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca9755f6-f8c2-4f8c-9125-ebc21e6b2b47 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	004e4584a2c3b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago       Running             kube-apiserver            2                   00e8d6fba91e9       kube-apiserver-kubernetes-upgrade-165951
	c7f05d0e0ab33       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   7 seconds ago       Running             kube-scheduler            2                   07dd12d8a1ac7       kube-scheduler-kubernetes-upgrade-165951
	ad667bf2761ce       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   7 seconds ago       Running             kube-controller-manager   2                   221eeb0837955       kube-controller-manager-kubernetes-upgrade-165951
	3dbe4389b3425       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   40f1b770c80cc       etcd-kubernetes-upgrade-165951
	bf1d30f907c21       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   12 seconds ago      Exited              kube-scheduler            1                   b09daa04803ec       kube-scheduler-kubernetes-upgrade-165951
	084f91112d945       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   12 seconds ago      Exited              etcd                      1                   88bba1c7c50ad       etcd-kubernetes-upgrade-165951
	7309973a380c7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   12 seconds ago      Exited              kube-apiserver            1                   10393b8670ca7       kube-apiserver-kubernetes-upgrade-165951
	325ccf6732d9b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   12 seconds ago      Exited              kube-controller-manager   1                   f04814c7737e4       kube-controller-manager-kubernetes-upgrade-165951
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-165951
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-165951
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 00:20:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-165951
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:20:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 00:20:21 +0000   Fri, 16 Aug 2024 00:20:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 00:20:21 +0000   Fri, 16 Aug 2024 00:20:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 00:20:21 +0000   Fri, 16 Aug 2024 00:20:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 00:20:21 +0000   Fri, 16 Aug 2024 00:20:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.157
	  Hostname:    kubernetes-upgrade-165951
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ba44c3ffdca4f9dbfd89e8e9bcac8a5
	  System UUID:                5ba44c3f-fdca-4f9d-bfd8-9e8e9bcac8a5
	  Boot ID:                    40fad2f2-78c8-48ba-9a3e-1674d8c69d75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-165951              100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17s
	  kube-system                 kube-scheduler-kubernetes-upgrade-165951    100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                200m (10%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 26s)  kubelet          Node kubernetes-upgrade-165951 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 26s)  kubelet          Node kubernetes-upgrade-165951 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 26s)  kubelet          Node kubernetes-upgrade-165951 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node kubernetes-upgrade-165951 event: Registered Node kubernetes-upgrade-165951 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-165951 event: Registered Node kubernetes-upgrade-165951 in Controller
	
	
	==> dmesg <==
	[  +2.708067] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.601450] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.844877] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.059525] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056633] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.183618] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.136740] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.304772] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +4.310995] systemd-fstab-generator[726]: Ignoring "noauto" option for root device
	[  +0.059422] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.950077] systemd-fstab-generator[848]: Ignoring "noauto" option for root device
	[Aug16 00:20] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	[  +0.087649] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.708827] systemd-fstab-generator[1792]: Ignoring "noauto" option for root device
	[  +0.210183] systemd-fstab-generator[1804]: Ignoring "noauto" option for root device
	[  +0.210888] systemd-fstab-generator[1821]: Ignoring "noauto" option for root device
	[  +0.196445] systemd-fstab-generator[1833]: Ignoring "noauto" option for root device
	[  +0.366473] systemd-fstab-generator[1861]: Ignoring "noauto" option for root device
	[  +0.967078] systemd-fstab-generator[2059]: Ignoring "noauto" option for root device
	[  +0.071105] kauditd_printk_skb: 201 callbacks suppressed
	[  +2.327546] systemd-fstab-generator[2321]: Ignoring "noauto" option for root device
	[  +5.875710] systemd-fstab-generator[2586]: Ignoring "noauto" option for root device
	[  +0.081153] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [084f91112d945526de57d9a9e2058ea5d56c387418461c1f704bccb82d4a95c5] <==
	{"level":"info","ts":"2024-08-16T00:20:12.771842Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-16T00:20:12.818623Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2d4154f8677556f0","local-member-id":"e97ba2b9037c192e","commit-index":317}
	{"level":"info","ts":"2024-08-16T00:20:12.820965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-16T00:20:12.822875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became follower at term 2"}
	{"level":"info","ts":"2024-08-16T00:20:12.823032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e97ba2b9037c192e [peers: [], term: 2, commit: 317, applied: 0, lastindex: 317, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-16T00:20:12.831967Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-16T00:20:12.885382Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":310}
	{"level":"info","ts":"2024-08-16T00:20:12.909885Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-16T00:20:12.920479Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e97ba2b9037c192e","timeout":"7s"}
	{"level":"info","ts":"2024-08-16T00:20:12.925088Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e97ba2b9037c192e"}
	{"level":"info","ts":"2024-08-16T00:20:12.926806Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"e97ba2b9037c192e","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-16T00:20:12.927367Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:20:12.932033Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-16T00:20:12.932250Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T00:20:12.932345Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T00:20:12.932358Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T00:20:12.932599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e switched to configuration voters=(16824219748483733806)"}
	{"level":"info","ts":"2024-08-16T00:20:12.932724Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2d4154f8677556f0","local-member-id":"e97ba2b9037c192e","added-peer-id":"e97ba2b9037c192e","added-peer-peer-urls":["https://192.168.72.157:2380"]}
	{"level":"info","ts":"2024-08-16T00:20:12.932948Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2d4154f8677556f0","local-member-id":"e97ba2b9037c192e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:20:12.933009Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:20:12.934701Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T00:20:12.943009Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e97ba2b9037c192e","initial-advertise-peer-urls":["https://192.168.72.157:2380"],"listen-peer-urls":["https://192.168.72.157:2380"],"advertise-client-urls":["https://192.168.72.157:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.157:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T00:20:12.946794Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T00:20:12.946979Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.157:2380"}
	{"level":"info","ts":"2024-08-16T00:20:12.947009Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.157:2380"}
	
	
	==> etcd [3dbe4389b3425f30178170c19d9e988d58cbb7c4f1b51e056146ec3469c88c5e] <==
	{"level":"info","ts":"2024-08-16T00:20:18.430525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e switched to configuration voters=(16824219748483733806)"}
	{"level":"info","ts":"2024-08-16T00:20:18.430601Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2d4154f8677556f0","local-member-id":"e97ba2b9037c192e","added-peer-id":"e97ba2b9037c192e","added-peer-peer-urls":["https://192.168.72.157:2380"]}
	{"level":"info","ts":"2024-08-16T00:20:18.430706Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2d4154f8677556f0","local-member-id":"e97ba2b9037c192e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:20:18.430833Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:20:18.435049Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T00:20:18.437848Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e97ba2b9037c192e","initial-advertise-peer-urls":["https://192.168.72.157:2380"],"listen-peer-urls":["https://192.168.72.157:2380"],"advertise-client-urls":["https://192.168.72.157:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.157:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T00:20:18.437946Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T00:20:18.438164Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.157:2380"}
	{"level":"info","ts":"2024-08-16T00:20:18.438205Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.157:2380"}
	{"level":"info","ts":"2024-08-16T00:20:20.097229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T00:20:20.097360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T00:20:20.097432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e received MsgPreVoteResp from e97ba2b9037c192e at term 2"}
	{"level":"info","ts":"2024-08-16T00:20:20.097467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T00:20:20.097492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e received MsgVoteResp from e97ba2b9037c192e at term 3"}
	{"level":"info","ts":"2024-08-16T00:20:20.097519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became leader at term 3"}
	{"level":"info","ts":"2024-08-16T00:20:20.097553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e97ba2b9037c192e elected leader e97ba2b9037c192e at term 3"}
	{"level":"info","ts":"2024-08-16T00:20:20.103060Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e97ba2b9037c192e","local-member-attributes":"{Name:kubernetes-upgrade-165951 ClientURLs:[https://192.168.72.157:2379]}","request-path":"/0/members/e97ba2b9037c192e/attributes","cluster-id":"2d4154f8677556f0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T00:20:20.103159Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:20:20.103183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:20:20.103794Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T00:20:20.103844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T00:20:20.104870Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:20:20.104891Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:20:20.106174Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.157:2379"}
	{"level":"info","ts":"2024-08-16T00:20:20.106958Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:20:25 up 0 min,  0 users,  load average: 1.60, 0.42, 0.14
	Linux kubernetes-upgrade-165951 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [004e4584a2c3b05123e7404fa039f90eccfdd626d50c3d11c3c40ca81c6f7249] <==
	I0816 00:20:21.516326       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 00:20:21.516330       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 00:20:21.516333       1 cache.go:39] Caches are synced for autoregister controller
	I0816 00:20:21.545521       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 00:20:21.555093       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 00:20:21.555175       1 policy_source.go:224] refreshing policies
	I0816 00:20:21.564705       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 00:20:21.564780       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 00:20:21.564968       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 00:20:21.565714       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 00:20:21.565857       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 00:20:21.565990       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 00:20:21.571921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 00:20:21.575921       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0816 00:20:21.581097       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0816 00:20:22.371352       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 00:20:22.825979       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 00:20:22.839613       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 00:20:22.885997       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 00:20:23.010437       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 00:20:23.022986       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 00:20:24.802168       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 00:20:25.249127       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 00:20:25.400133       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0816 00:20:25.511778       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [7309973a380c71cff9639b67ab8d02337f5f2fb85e202e4b2515b7f6eaab5f58] <==
	I0816 00:20:12.786534       1 options.go:228] external host was not specified, using 192.168.72.157
	I0816 00:20:12.794686       1 server.go:142] Version: v1.31.0
	I0816 00:20:12.799903       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:20:13.412665       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0816 00:20:13.461345       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0816 00:20:13.461387       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0816 00:20:13.464020       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 00:20:13.464210       1 instance.go:232] Using reconciler: lease
	W0816 00:20:14.064617       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:50666->127.0.0.1:2379: read: connection reset by peer"
	W0816 00:20:14.064704       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:50652->127.0.0.1:2379: read: connection reset by peer"
	W0816 00:20:14.064838       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:50660->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-controller-manager [325ccf6732d9bfb6a2b7d01fa2d511ea94e929a654ba2098edefae87b50704b4] <==
	
	
	==> kube-controller-manager [ad667bf2761ce3e9238754390108ecca9e077f669db79cded231f7a92ca55f0c] <==
	I0816 00:20:24.835996       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0816 00:20:24.836019       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0816 00:20:24.836024       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0816 00:20:24.836029       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0816 00:20:24.836119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-165951"
	I0816 00:20:24.839614       1 shared_informer.go:320] Caches are synced for taint
	I0816 00:20:24.839719       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0816 00:20:24.839827       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-165951"
	I0816 00:20:24.839857       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0816 00:20:24.846696       1 shared_informer.go:320] Caches are synced for crt configmap
	I0816 00:20:24.896857       1 shared_informer.go:320] Caches are synced for deployment
	I0816 00:20:24.909248       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 00:20:24.945618       1 shared_informer.go:320] Caches are synced for disruption
	I0816 00:20:24.945678       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0816 00:20:24.954417       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 00:20:24.993555       1 shared_informer.go:320] Caches are synced for attach detach
	I0816 00:20:25.005624       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0816 00:20:25.367359       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-165951"
	I0816 00:20:25.467607       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 00:20:25.495181       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 00:20:25.495258       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0816 00:20:25.662790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="135.107678ms"
	I0816 00:20:25.750114       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="87.165491ms"
	I0816 00:20:25.808317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="58.160133ms"
	I0816 00:20:25.808453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="74.395µs"
	
	
	==> kube-scheduler [bf1d30f907c2154716175aedf0d6697d255dbc797688819af864d0a270dbe7b4] <==
	
	
	==> kube-scheduler [c7f05d0e0ab33c834399e529c3b71a160ff0d7f9e1f916d48e212c197b6f22c9] <==
	I0816 00:20:19.005157       1 serving.go:386] Generated self-signed cert in-memory
	W0816 00:20:21.442932       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 00:20:21.442976       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 00:20:21.443039       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 00:20:21.443045       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 00:20:21.484278       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 00:20:21.485326       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:20:21.487646       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 00:20:21.487693       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 00:20:21.490170       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 00:20:21.491181       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 00:20:21.588690       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 00:20:21 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:21.622199    2328 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 16 00:20:21 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:21.622300    2328 projected.go:194] Error preparing data for projected volume kube-api-access-xvm5h for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 16 00:20:21 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:21.622462    2328 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fd383e7c-ea10-4d76-aa19-a382029dedc1-kube-api-access-xvm5h podName:fd383e7c-ea10-4d76-aa19-a382029dedc1 nodeName:}" failed. No retries permitted until 2024-08-16 00:20:22.122389427 +0000 UTC m=+4.765514804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xvm5h" (UniqueName: "kubernetes.io/projected/fd383e7c-ea10-4d76-aa19-a382029dedc1-kube-api-access-xvm5h") pod "storage-provisioner" (UID: "fd383e7c-ea10-4d76-aa19-a382029dedc1") : configmap "kube-root-ca.crt" not found
	Aug 16 00:20:21 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:21.641363    2328 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-165951"
	Aug 16 00:20:21 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:21.641522    2328 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-165951"
	Aug 16 00:20:21 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:21.641576    2328 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 00:20:21 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:21.642855    2328 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 00:20:22 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:22.216865    2328 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 16 00:20:22 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:22.217029    2328 projected.go:194] Error preparing data for projected volume kube-api-access-xvm5h for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 16 00:20:22 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:22.217190    2328 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fd383e7c-ea10-4d76-aa19-a382029dedc1-kube-api-access-xvm5h podName:fd383e7c-ea10-4d76-aa19-a382029dedc1 nodeName:}" failed. No retries permitted until 2024-08-16 00:20:23.217154754 +0000 UTC m=+5.860280139 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvm5h" (UniqueName: "kubernetes.io/projected/fd383e7c-ea10-4d76-aa19-a382029dedc1-kube-api-access-xvm5h") pod "storage-provisioner" (UID: "fd383e7c-ea10-4d76-aa19-a382029dedc1") : configmap "kube-root-ca.crt" not found
	Aug 16 00:20:23 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:23.224333    2328 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 16 00:20:23 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:23.224393    2328 projected.go:194] Error preparing data for projected volume kube-api-access-xvm5h for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 16 00:20:23 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:23.224472    2328 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fd383e7c-ea10-4d76-aa19-a382029dedc1-kube-api-access-xvm5h podName:fd383e7c-ea10-4d76-aa19-a382029dedc1 nodeName:}" failed. No retries permitted until 2024-08-16 00:20:25.224449861 +0000 UTC m=+7.867575240 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvm5h" (UniqueName: "kubernetes.io/projected/fd383e7c-ea10-4d76-aa19-a382029dedc1-kube-api-access-xvm5h") pod "storage-provisioner" (UID: "fd383e7c-ea10-4d76-aa19-a382029dedc1") : configmap "kube-root-ca.crt" not found
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:25.240530    2328 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:25.240559    2328 projected.go:194] Error preparing data for projected volume kube-api-access-xvm5h for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: E0816 00:20:25.240607    2328 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fd383e7c-ea10-4d76-aa19-a382029dedc1-kube-api-access-xvm5h podName:fd383e7c-ea10-4d76-aa19-a382029dedc1 nodeName:}" failed. No retries permitted until 2024-08-16 00:20:29.240591166 +0000 UTC m=+11.883716543 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvm5h" (UniqueName: "kubernetes.io/projected/fd383e7c-ea10-4d76-aa19-a382029dedc1-kube-api-access-xvm5h") pod "storage-provisioner" (UID: "fd383e7c-ea10-4d76-aa19-a382029dedc1") : configmap "kube-root-ca.crt" not found
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:25.543122    2328 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55dp\" (UniqueName: \"kubernetes.io/projected/67523cad-4de9-4964-81a2-ca7158550d49-kube-api-access-b55dp\") pod \"kube-proxy-g2r82\" (UID: \"67523cad-4de9-4964-81a2-ca7158550d49\") " pod="kube-system/kube-proxy-g2r82"
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:25.543206    2328 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67523cad-4de9-4964-81a2-ca7158550d49-lib-modules\") pod \"kube-proxy-g2r82\" (UID: \"67523cad-4de9-4964-81a2-ca7158550d49\") " pod="kube-system/kube-proxy-g2r82"
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:25.543234    2328 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67523cad-4de9-4964-81a2-ca7158550d49-kube-proxy\") pod \"kube-proxy-g2r82\" (UID: \"67523cad-4de9-4964-81a2-ca7158550d49\") " pod="kube-system/kube-proxy-g2r82"
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:25.543255    2328 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67523cad-4de9-4964-81a2-ca7158550d49-xtables-lock\") pod \"kube-proxy-g2r82\" (UID: \"67523cad-4de9-4964-81a2-ca7158550d49\") " pod="kube-system/kube-proxy-g2r82"
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:25.695554    2328 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:25.744281    2328 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5acfc6d1-0973-4bb1-b17a-f81f1fb5e024-config-volume\") pod \"coredns-6f6b679f8f-d25kw\" (UID: \"5acfc6d1-0973-4bb1-b17a-f81f1fb5e024\") " pod="kube-system/coredns-6f6b679f8f-d25kw"
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:25.744397    2328 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dd14ecf-4e7a-495a-bc5c-b34e0a6db77a-config-volume\") pod \"coredns-6f6b679f8f-kq54t\" (UID: \"4dd14ecf-4e7a-495a-bc5c-b34e0a6db77a\") " pod="kube-system/coredns-6f6b679f8f-kq54t"
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:25.744420    2328 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc42b\" (UniqueName: \"kubernetes.io/projected/4dd14ecf-4e7a-495a-bc5c-b34e0a6db77a-kube-api-access-jc42b\") pod \"coredns-6f6b679f8f-kq54t\" (UID: \"4dd14ecf-4e7a-495a-bc5c-b34e0a6db77a\") " pod="kube-system/coredns-6f6b679f8f-kq54t"
	Aug 16 00:20:25 kubernetes-upgrade-165951 kubelet[2328]: I0816 00:20:25.744449    2328 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p6d4\" (UniqueName: \"kubernetes.io/projected/5acfc6d1-0973-4bb1-b17a-f81f1fb5e024-kube-api-access-4p6d4\") pod \"coredns-6f6b679f8f-d25kw\" (UID: \"5acfc6d1-0973-4bb1-b17a-f81f1fb5e024\") " pod="kube-system/coredns-6f6b679f8f-d25kw"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:20:24.533330   63780 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19452-12919/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
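The "bufio.Scanner: token too long" message in the stderr block above is Go's bufio.ErrTooLong: a bufio.Scanner refuses tokens larger than the default 64 KiB limit, and lastStart.txt evidently contains a longer line. Below is a minimal, self-contained Go sketch (the readLongLines helper is hypothetical and not part of the minikube source) showing how raising the scanner's buffer limit avoids this error when reading files with very long lines:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLongLines scans a file line by line, raising the scanner's maximum
	// token size so lines longer than the default 64 KiB do not trigger
	// bufio.ErrTooLong ("bufio.Scanner: token too long").
	func readLongLines(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// Start with a 64 KiB buffer and allow it to grow up to 10 MiB per line.
		scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for scanner.Scan() {
			fmt.Println(len(scanner.Text())) // e.g. report each line's length
		}
		return scanner.Err()
	}

	func main() {
		if err := readLongLines(os.Args[1]); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

Run as "go run main.go /path/to/lastStart.txt"; the larger maximum is the usual way to scan log files whose lines can exceed the default limit.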
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-165951 -n kubernetes-upgrade-165951
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-165951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-6f6b679f8f-d25kw coredns-6f6b679f8f-kq54t storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-165951 describe pod coredns-6f6b679f8f-d25kw coredns-6f6b679f8f-kq54t storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-165951 describe pod coredns-6f6b679f8f-d25kw coredns-6f6b679f8f-kq54t storage-provisioner: exit status 1 (99.113375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6f6b679f8f-d25kw" not found
	Error from server (NotFound): pods "coredns-6f6b679f8f-kq54t" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-165951 describe pod coredns-6f6b679f8f-d25kw coredns-6f6b679f8f-kq54t storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-165951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-165951
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-165951: (1.219930759s)
--- FAIL: TestKubernetesUpgrade (376.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (300.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-098619 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-098619 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m0.013800983s)

                                                
                                                
-- stdout --
	* [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 00:22:56.199213   71480 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:22:56.199503   71480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:22:56.199514   71480 out.go:358] Setting ErrFile to fd 2...
	I0816 00:22:56.199519   71480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:22:56.199758   71480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:22:56.200374   71480 out.go:352] Setting JSON to false
	I0816 00:22:56.201456   71480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7476,"bootTime":1723760300,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:22:56.201519   71480 start.go:139] virtualization: kvm guest
	I0816 00:22:56.203753   71480 out.go:177] * [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:22:56.205229   71480 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:22:56.205288   71480 notify.go:220] Checking for updates...
	I0816 00:22:56.207881   71480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:22:56.209205   71480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:22:56.210440   71480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:22:56.211702   71480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:22:56.212966   71480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:22:56.214689   71480 config.go:182] Loaded profile config "bridge-697641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:22:56.214808   71480 config.go:182] Loaded profile config "enable-default-cni-697641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:22:56.214910   71480 config.go:182] Loaded profile config "flannel-697641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:22:56.215033   71480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:22:56.252808   71480 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 00:22:56.254022   71480 start.go:297] selected driver: kvm2
	I0816 00:22:56.254042   71480 start.go:901] validating driver "kvm2" against <nil>
	I0816 00:22:56.254054   71480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:22:56.254718   71480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:22:56.254822   71480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:22:56.271090   71480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:22:56.271143   71480 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 00:22:56.271333   71480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:22:56.271363   71480 cni.go:84] Creating CNI manager for ""
	I0816 00:22:56.271371   71480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:22:56.271378   71480 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 00:22:56.271422   71480 start.go:340] cluster config:
	{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:22:56.271523   71480 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:22:56.273240   71480 out.go:177] * Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	I0816 00:22:56.274611   71480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:22:56.274643   71480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:22:56.274652   71480 cache.go:56] Caching tarball of preloaded images
	I0816 00:22:56.274738   71480 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:22:56.274752   71480 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 00:22:56.274872   71480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:22:56.274897   71480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json: {Name:mk15722e10aaee04cf64bdedd88f93a3f1d79b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:22:56.275098   71480 start.go:360] acquireMachinesLock for old-k8s-version-098619: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:23:23.515403   71480 start.go:364] duration metric: took 27.24027423s to acquireMachinesLock for "old-k8s-version-098619"
	I0816 00:23:23.515510   71480 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:23:23.515626   71480 start.go:125] createHost starting for "" (driver="kvm2")
	I0816 00:23:23.517222   71480 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 00:23:23.517393   71480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:23:23.517441   71480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:23:23.535437   71480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I0816 00:23:23.535945   71480 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:23:23.536539   71480 main.go:141] libmachine: Using API Version  1
	I0816 00:23:23.536565   71480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:23:23.536948   71480 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:23:23.537175   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:23:23.537367   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:23:23.537522   71480 start.go:159] libmachine.API.Create for "old-k8s-version-098619" (driver="kvm2")
	I0816 00:23:23.537553   71480 client.go:168] LocalClient.Create starting
	I0816 00:23:23.537591   71480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem
	I0816 00:23:23.537633   71480 main.go:141] libmachine: Decoding PEM data...
	I0816 00:23:23.537655   71480 main.go:141] libmachine: Parsing certificate...
	I0816 00:23:23.537736   71480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem
	I0816 00:23:23.537762   71480 main.go:141] libmachine: Decoding PEM data...
	I0816 00:23:23.537781   71480 main.go:141] libmachine: Parsing certificate...
	I0816 00:23:23.537804   71480 main.go:141] libmachine: Running pre-create checks...
	I0816 00:23:23.537818   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .PreCreateCheck
	I0816 00:23:23.538187   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:23:23.538571   71480 main.go:141] libmachine: Creating machine...
	I0816 00:23:23.538584   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .Create
	I0816 00:23:23.538732   71480 main.go:141] libmachine: (old-k8s-version-098619) Creating KVM machine...
	I0816 00:23:23.540174   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found existing default KVM network
	I0816 00:23:23.541831   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:23.541643   71796 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d5:29:43} reservation:<nil>}
	I0816 00:23:23.543063   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:23.542973   71796 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:56:29} reservation:<nil>}
	I0816 00:23:23.544024   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:23.543889   71796 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:d9:ac:8a} reservation:<nil>}
	I0816 00:23:23.545321   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:23.545211   71796 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5960}
	I0816 00:23:23.545350   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | created network xml: 
	I0816 00:23:23.545370   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | <network>
	I0816 00:23:23.545379   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG |   <name>mk-old-k8s-version-098619</name>
	I0816 00:23:23.545388   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG |   <dns enable='no'/>
	I0816 00:23:23.545394   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG |   
	I0816 00:23:23.545404   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0816 00:23:23.545416   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG |     <dhcp>
	I0816 00:23:23.545426   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0816 00:23:23.545439   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG |     </dhcp>
	I0816 00:23:23.545448   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG |   </ip>
	I0816 00:23:23.545455   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG |   
	I0816 00:23:23.545464   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | </network>
	I0816 00:23:23.545471   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | 
	I0816 00:23:23.551310   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | trying to create private KVM network mk-old-k8s-version-098619 192.168.72.0/24...
	I0816 00:23:23.636886   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | private KVM network mk-old-k8s-version-098619 192.168.72.0/24 created
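	[Note] For reference, the private network created above can be reproduced by hand. The sketch below is a hypothetical illustration only: minikube's kvm2 driver talks to libvirt directly rather than shelling out, so this just writes the same XML shown in the log to a file and defines it with virsh. The network name and subnet come from the log; everything else is assumed.

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	// networkXML mirrors the XML the kvm2 driver reports creating above.
	const networkXML = `<network>
	  <name>mk-old-k8s-version-098619</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		// Write the XML to a temporary file so virsh can read it.
		f, err := os.CreateTemp("", "mk-net-*.xml")
		if err != nil {
			log.Fatal(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(networkXML); err != nil {
			log.Fatal(err)
		}
		f.Close()

		// Define, start and autostart the network, one virsh call per step.
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-old-k8s-version-098619"},
			{"net-autostart", "mk-old-k8s-version-098619"},
		} {
			if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
				log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
			}
		}
	}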
	I0816 00:23:23.636919   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:23.636871   71796 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:23:23.636931   71480 main.go:141] libmachine: (old-k8s-version-098619) Setting up store path in /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619 ...
	I0816 00:23:23.636949   71480 main.go:141] libmachine: (old-k8s-version-098619) Building disk image from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0816 00:23:23.637015   71480 main.go:141] libmachine: (old-k8s-version-098619) Downloading /home/jenkins/minikube-integration/19452-12919/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0816 00:23:23.918859   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:23.918722   71796 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa...
	I0816 00:23:24.041986   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:24.041837   71796 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/old-k8s-version-098619.rawdisk...
	I0816 00:23:24.042024   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Writing magic tar header
	I0816 00:23:24.042045   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Writing SSH key tar header
	I0816 00:23:24.042064   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:24.042018   71796 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619 ...
	I0816 00:23:24.042195   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619
	I0816 00:23:24.042214   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube/machines
	I0816 00:23:24.042237   71480 main.go:141] libmachine: (old-k8s-version-098619) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619 (perms=drwx------)
	I0816 00:23:24.042250   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:23:24.042288   71480 main.go:141] libmachine: (old-k8s-version-098619) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube/machines (perms=drwxr-xr-x)
	I0816 00:23:24.042314   71480 main.go:141] libmachine: (old-k8s-version-098619) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919/.minikube (perms=drwxr-xr-x)
	I0816 00:23:24.042339   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19452-12919
	I0816 00:23:24.042355   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 00:23:24.042367   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Checking permissions on dir: /home/jenkins
	I0816 00:23:24.042380   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Checking permissions on dir: /home
	I0816 00:23:24.042391   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Skipping /home - not owner
	I0816 00:23:24.042403   71480 main.go:141] libmachine: (old-k8s-version-098619) Setting executable bit set on /home/jenkins/minikube-integration/19452-12919 (perms=drwxrwxr-x)
	I0816 00:23:24.042418   71480 main.go:141] libmachine: (old-k8s-version-098619) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0816 00:23:24.042430   71480 main.go:141] libmachine: (old-k8s-version-098619) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 00:23:24.042442   71480 main.go:141] libmachine: (old-k8s-version-098619) Creating domain...
	I0816 00:23:24.043652   71480 main.go:141] libmachine: (old-k8s-version-098619) define libvirt domain using xml: 
	I0816 00:23:24.043681   71480 main.go:141] libmachine: (old-k8s-version-098619) <domain type='kvm'>
	I0816 00:23:24.043692   71480 main.go:141] libmachine: (old-k8s-version-098619)   <name>old-k8s-version-098619</name>
	I0816 00:23:24.043701   71480 main.go:141] libmachine: (old-k8s-version-098619)   <memory unit='MiB'>2200</memory>
	I0816 00:23:24.043711   71480 main.go:141] libmachine: (old-k8s-version-098619)   <vcpu>2</vcpu>
	I0816 00:23:24.043721   71480 main.go:141] libmachine: (old-k8s-version-098619)   <features>
	I0816 00:23:24.043731   71480 main.go:141] libmachine: (old-k8s-version-098619)     <acpi/>
	I0816 00:23:24.043742   71480 main.go:141] libmachine: (old-k8s-version-098619)     <apic/>
	I0816 00:23:24.043751   71480 main.go:141] libmachine: (old-k8s-version-098619)     <pae/>
	I0816 00:23:24.043767   71480 main.go:141] libmachine: (old-k8s-version-098619)     
	I0816 00:23:24.043779   71480 main.go:141] libmachine: (old-k8s-version-098619)   </features>
	I0816 00:23:24.043788   71480 main.go:141] libmachine: (old-k8s-version-098619)   <cpu mode='host-passthrough'>
	I0816 00:23:24.043800   71480 main.go:141] libmachine: (old-k8s-version-098619)   
	I0816 00:23:24.043809   71480 main.go:141] libmachine: (old-k8s-version-098619)   </cpu>
	I0816 00:23:24.043821   71480 main.go:141] libmachine: (old-k8s-version-098619)   <os>
	I0816 00:23:24.043830   71480 main.go:141] libmachine: (old-k8s-version-098619)     <type>hvm</type>
	I0816 00:23:24.043843   71480 main.go:141] libmachine: (old-k8s-version-098619)     <boot dev='cdrom'/>
	I0816 00:23:24.043858   71480 main.go:141] libmachine: (old-k8s-version-098619)     <boot dev='hd'/>
	I0816 00:23:24.043869   71480 main.go:141] libmachine: (old-k8s-version-098619)     <bootmenu enable='no'/>
	I0816 00:23:24.043878   71480 main.go:141] libmachine: (old-k8s-version-098619)   </os>
	I0816 00:23:24.043887   71480 main.go:141] libmachine: (old-k8s-version-098619)   <devices>
	I0816 00:23:24.043898   71480 main.go:141] libmachine: (old-k8s-version-098619)     <disk type='file' device='cdrom'>
	I0816 00:23:24.043916   71480 main.go:141] libmachine: (old-k8s-version-098619)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/boot2docker.iso'/>
	I0816 00:23:24.043940   71480 main.go:141] libmachine: (old-k8s-version-098619)       <target dev='hdc' bus='scsi'/>
	I0816 00:23:24.043949   71480 main.go:141] libmachine: (old-k8s-version-098619)       <readonly/>
	I0816 00:23:24.043956   71480 main.go:141] libmachine: (old-k8s-version-098619)     </disk>
	I0816 00:23:24.043967   71480 main.go:141] libmachine: (old-k8s-version-098619)     <disk type='file' device='disk'>
	I0816 00:23:24.043978   71480 main.go:141] libmachine: (old-k8s-version-098619)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0816 00:23:24.044000   71480 main.go:141] libmachine: (old-k8s-version-098619)       <source file='/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/old-k8s-version-098619.rawdisk'/>
	I0816 00:23:24.044008   71480 main.go:141] libmachine: (old-k8s-version-098619)       <target dev='hda' bus='virtio'/>
	I0816 00:23:24.044017   71480 main.go:141] libmachine: (old-k8s-version-098619)     </disk>
	I0816 00:23:24.044029   71480 main.go:141] libmachine: (old-k8s-version-098619)     <interface type='network'>
	I0816 00:23:24.044040   71480 main.go:141] libmachine: (old-k8s-version-098619)       <source network='mk-old-k8s-version-098619'/>
	I0816 00:23:24.044048   71480 main.go:141] libmachine: (old-k8s-version-098619)       <model type='virtio'/>
	I0816 00:23:24.044066   71480 main.go:141] libmachine: (old-k8s-version-098619)     </interface>
	I0816 00:23:24.044075   71480 main.go:141] libmachine: (old-k8s-version-098619)     <interface type='network'>
	I0816 00:23:24.044085   71480 main.go:141] libmachine: (old-k8s-version-098619)       <source network='default'/>
	I0816 00:23:24.044093   71480 main.go:141] libmachine: (old-k8s-version-098619)       <model type='virtio'/>
	I0816 00:23:24.044101   71480 main.go:141] libmachine: (old-k8s-version-098619)     </interface>
	I0816 00:23:24.044112   71480 main.go:141] libmachine: (old-k8s-version-098619)     <serial type='pty'>
	I0816 00:23:24.044121   71480 main.go:141] libmachine: (old-k8s-version-098619)       <target port='0'/>
	I0816 00:23:24.044128   71480 main.go:141] libmachine: (old-k8s-version-098619)     </serial>
	I0816 00:23:24.044137   71480 main.go:141] libmachine: (old-k8s-version-098619)     <console type='pty'>
	I0816 00:23:24.044145   71480 main.go:141] libmachine: (old-k8s-version-098619)       <target type='serial' port='0'/>
	I0816 00:23:24.044154   71480 main.go:141] libmachine: (old-k8s-version-098619)     </console>
	I0816 00:23:24.044161   71480 main.go:141] libmachine: (old-k8s-version-098619)     <rng model='virtio'>
	I0816 00:23:24.044175   71480 main.go:141] libmachine: (old-k8s-version-098619)       <backend model='random'>/dev/random</backend>
	I0816 00:23:24.044187   71480 main.go:141] libmachine: (old-k8s-version-098619)     </rng>
	I0816 00:23:24.044197   71480 main.go:141] libmachine: (old-k8s-version-098619)     
	I0816 00:23:24.044203   71480 main.go:141] libmachine: (old-k8s-version-098619)     
	I0816 00:23:24.044211   71480 main.go:141] libmachine: (old-k8s-version-098619)   </devices>
	I0816 00:23:24.044219   71480 main.go:141] libmachine: (old-k8s-version-098619) </domain>
	I0816 00:23:24.044230   71480 main.go:141] libmachine: (old-k8s-version-098619) 
	I0816 00:23:24.053017   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:0a:38:42 in network default
	I0816 00:23:24.053967   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:24.053994   71480 main.go:141] libmachine: (old-k8s-version-098619) Ensuring networks are active...
	I0816 00:23:24.054986   71480 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network default is active
	I0816 00:23:24.055429   71480 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network mk-old-k8s-version-098619 is active
	I0816 00:23:24.056131   71480 main.go:141] libmachine: (old-k8s-version-098619) Getting domain xml...
	I0816 00:23:24.056912   71480 main.go:141] libmachine: (old-k8s-version-098619) Creating domain...
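	[Note] The domain XML dumped above is generated from the machine config (name, memory, vCPUs, disk and network names). A minimal sketch of that substitution step using Go's text/template; the struct and field names here are illustrative, not minikube's, and the rendered XML would then be handed to libvirt (e.g. virsh define) as the log shows.

	package main

	import (
		"os"
		"text/template"
	)

	// domainTmpl is a trimmed-down version of the XML shown in the log; only the
	// per-machine fields are templated.
	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.Memory}}</memory>
	  <vcpu>{{.CPU}}</vcpu>
	  <devices>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	type machine struct {
		Name    string
		Memory  int
		CPU     int
		Network string
	}

	func main() {
		m := machine{Name: "old-k8s-version-098619", Memory: 2200, CPU: 2, Network: "mk-old-k8s-version-098619"}
		t := template.Must(template.New("domain").Parse(domainTmpl))
		// Render the domain definition to stdout; a driver would define it via libvirt.
		if err := t.Execute(os.Stdout, m); err != nil {
			panic(err)
		}
	}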
	I0816 00:23:25.676600   71480 main.go:141] libmachine: (old-k8s-version-098619) Waiting to get IP...
	I0816 00:23:25.677984   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:25.679252   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:25.679306   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:25.679246   71796 retry.go:31] will retry after 296.794667ms: waiting for machine to come up
	I0816 00:23:25.978091   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:25.978918   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:25.978939   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:25.978835   71796 retry.go:31] will retry after 351.519133ms: waiting for machine to come up
	I0816 00:23:26.332572   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:26.333237   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:26.333265   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:26.333185   71796 retry.go:31] will retry after 441.815388ms: waiting for machine to come up
	I0816 00:23:26.776666   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:26.777292   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:26.777319   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:26.777269   71796 retry.go:31] will retry after 418.337983ms: waiting for machine to come up
	I0816 00:23:27.197081   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:27.197629   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:27.197658   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:27.197588   71796 retry.go:31] will retry after 656.191501ms: waiting for machine to come up
	I0816 00:23:27.855235   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:27.856006   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:27.856031   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:27.855954   71796 retry.go:31] will retry after 786.275423ms: waiting for machine to come up
	I0816 00:23:28.644465   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:28.644997   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:28.645028   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:28.644958   71796 retry.go:31] will retry after 762.849122ms: waiting for machine to come up
	I0816 00:23:29.410027   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:29.410451   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:29.410480   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:29.410441   71796 retry.go:31] will retry after 1.225682467s: waiting for machine to come up
	I0816 00:23:30.637615   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:30.638245   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:30.638273   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:30.638191   71796 retry.go:31] will retry after 1.549917623s: waiting for machine to come up
	I0816 00:23:32.189669   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:32.190373   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:32.190403   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:32.190323   71796 retry.go:31] will retry after 1.431030801s: waiting for machine to come up
	I0816 00:23:33.622629   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:33.623202   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:33.623247   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:33.623152   71796 retry.go:31] will retry after 2.805434197s: waiting for machine to come up
	I0816 00:23:36.431523   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:36.432093   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:36.432112   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:36.432045   71796 retry.go:31] will retry after 3.545659106s: waiting for machine to come up
	I0816 00:23:39.979383   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:39.980008   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:39.980036   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:39.979946   71796 retry.go:31] will retry after 3.787927209s: waiting for machine to come up
	I0816 00:23:43.769441   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:43.769883   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:23:43.769912   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:23:43.769824   71796 retry.go:31] will retry after 4.820709049s: waiting for machine to come up
	I0816 00:23:48.591999   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:48.592591   71480 main.go:141] libmachine: (old-k8s-version-098619) Found IP for machine: 192.168.72.137
	I0816 00:23:48.592614   71480 main.go:141] libmachine: (old-k8s-version-098619) Reserving static IP address...
	I0816 00:23:48.592636   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has current primary IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:48.592979   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"} in network mk-old-k8s-version-098619
	I0816 00:23:48.677365   71480 main.go:141] libmachine: (old-k8s-version-098619) Reserved static IP address: 192.168.72.137
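	[Note] The "waiting for machine to come up" lines above show the driver polling for a DHCP lease with growing delays until the MAC address appears. A rough sketch of that wait, polling `virsh net-dhcp-leases` for the VM's MAC (the exact backoff and jitter used by retry.go are not reproduced here):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	// leaseIP returns the IP the named libvirt network leased to mac, or "" if none yet.
	func leaseIP(network, mac string) (string, error) {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(out), "\n") {
			if !strings.Contains(line, mac) {
				continue
			}
			// Row layout: expiry date, expiry time, MAC, protocol, IP/prefix, hostname, client ID.
			fields := strings.Fields(line)
			if len(fields) >= 5 {
				return strings.Split(fields[4], "/")[0], nil
			}
		}
		return "", nil
	}

	func main() {
		const network, mac = "mk-old-k8s-version-098619", "52:54:00:22:73:72"
		delay := 300 * time.Millisecond
		for i := 0; i < 20; i++ {
			ip, err := leaseIP(network, mac)
			if err != nil {
				log.Fatal(err)
			}
			if ip != "" {
				fmt.Println("found IP:", ip)
				return
			}
			log.Printf("no lease yet, retrying in %v", delay)
			time.Sleep(delay)
			delay *= 2 // crude exponential backoff; the real retry adds jitter
		}
		log.Fatal("timed out waiting for a DHCP lease")
	}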
	I0816 00:23:48.677397   71480 main.go:141] libmachine: (old-k8s-version-098619) Waiting for SSH to be available...
	I0816 00:23:48.677414   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Getting to WaitForSSH function...
	I0816 00:23:48.680512   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:48.680884   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:minikube Clientid:01:52:54:00:22:73:72}
	I0816 00:23:48.680910   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:48.681100   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH client type: external
	I0816 00:23:48.681127   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa (-rw-------)
	I0816 00:23:48.681176   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:23:48.681188   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | About to run SSH command:
	I0816 00:23:48.681228   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | exit 0
	I0816 00:23:48.814310   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | SSH cmd err, output: <nil>: 
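	[Note] WaitForSSH above simply runs `exit 0` over SSH until it succeeds. A stripped-down version of that probe, reusing the host, key path and the main options from the logged ssh command (timeout and retry interval are assumptions):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		key := "/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa"
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", key,
			"docker@192.168.72.137", "exit 0",
		}
		// Keep probing until the guest's sshd answers; give up after ~2 minutes.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				log.Println("SSH is available")
				return
			}
			time.Sleep(3 * time.Second)
		}
		log.Fatal("timed out waiting for SSH")
	}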
	I0816 00:23:48.814633   71480 main.go:141] libmachine: (old-k8s-version-098619) KVM machine creation complete!
	I0816 00:23:48.814934   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:23:48.815536   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:23:48.815720   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:23:48.815893   71480 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 00:23:48.815906   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetState
	I0816 00:23:48.817653   71480 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 00:23:48.817667   71480 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 00:23:48.817676   71480 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 00:23:48.817685   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:48.820659   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:48.821114   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:48.821143   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:48.821328   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:48.821561   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:48.821719   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:48.821861   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:48.822053   71480 main.go:141] libmachine: Using SSH client type: native
	I0816 00:23:48.822245   71480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:23:48.822257   71480 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 00:23:48.933360   71480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:23:48.933396   71480 main.go:141] libmachine: Detecting the provisioner...
	I0816 00:23:48.933406   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:48.936497   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:48.936891   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:48.936927   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:48.937110   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:48.937337   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:48.937494   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:48.937673   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:48.937817   71480 main.go:141] libmachine: Using SSH client type: native
	I0816 00:23:48.938015   71480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:23:48.938027   71480 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 00:23:49.052126   71480 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 00:23:49.052208   71480 main.go:141] libmachine: found compatible host: buildroot
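	[Note] Provisioner detection above is just `cat /etc/os-release` plus matching on the NAME field. A small sketch of that parse (standard os-release key=value format; the Buildroot match mirrors what the log reports):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Parse /etc/os-release into a key/value map, stripping optional quotes.
		f, err := os.Open("/etc/os-release")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		info := map[string]string{}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			k, v, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			info[k] = strings.Trim(v, `"`)
		}
		if err := sc.Err(); err != nil {
			panic(err)
		}

		if info["NAME"] == "Buildroot" {
			fmt.Println("found compatible host: buildroot", info["VERSION_ID"])
		} else {
			fmt.Println("unrecognised provisioner:", info["NAME"])
		}
	}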
	I0816 00:23:49.052218   71480 main.go:141] libmachine: Provisioning with buildroot...
	I0816 00:23:49.052226   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:23:49.052503   71480 buildroot.go:166] provisioning hostname "old-k8s-version-098619"
	I0816 00:23:49.052522   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:23:49.052725   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:49.055958   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.056403   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:49.056426   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.056636   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:49.056792   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:49.056922   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:49.057023   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:49.057163   71480 main.go:141] libmachine: Using SSH client type: native
	I0816 00:23:49.057387   71480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:23:49.057401   71480 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098619 && echo "old-k8s-version-098619" | sudo tee /etc/hostname
	I0816 00:23:49.194418   71480 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098619
	
	I0816 00:23:49.194453   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:49.197560   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.197992   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:49.198024   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.198218   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:49.198421   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:49.198603   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:49.198753   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:49.198926   71480 main.go:141] libmachine: Using SSH client type: native
	I0816 00:23:49.199096   71480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:23:49.199112   71480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098619/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:23:49.322121   71480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:23:49.322152   71480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:23:49.322187   71480 buildroot.go:174] setting up certificates
	I0816 00:23:49.322200   71480 provision.go:84] configureAuth start
	I0816 00:23:49.322212   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:23:49.323431   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:23:49.326189   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.326603   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:49.326636   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.326809   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:49.329213   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.329585   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:49.329622   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.329857   71480 provision.go:143] copyHostCerts
	I0816 00:23:49.329922   71480 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:23:49.329943   71480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:23:49.330023   71480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:23:49.330128   71480 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:23:49.330138   71480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:23:49.330169   71480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:23:49.330242   71480 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:23:49.330250   71480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:23:49.330300   71480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:23:49.330364   71480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098619 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-098619]
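	[Note] configureAuth above generates a server certificate whose SANs include the node IP and hostnames listed in the log line. The sketch below shows the same idea with crypto/x509; for brevity it self-signs instead of signing with the minikube CA (ca.pem/ca-key.pem) as the real flow does, and the 26280h lifetime is taken from the CertExpiration value in the config dump earlier.

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-098619"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the "generating server cert" line above.
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-098619"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.137")},
		}
		// Self-signed for the sketch: template and parent are the same certificate.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}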
	I0816 00:23:49.485239   71480 provision.go:177] copyRemoteCerts
	I0816 00:23:49.485304   71480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:23:49.485331   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:49.488063   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.488416   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:49.488448   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.488622   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:49.488783   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:49.488924   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:49.489030   71480 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:23:49.576618   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:23:49.603055   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:23:49.629047   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 00:23:49.653432   71480 provision.go:87] duration metric: took 331.21913ms to configureAuth
	I0816 00:23:49.653462   71480 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:23:49.653651   71480 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:23:49.653719   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:49.656178   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.656508   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:49.656534   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.656759   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:49.656943   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:49.657103   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:49.657259   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:49.657442   71480 main.go:141] libmachine: Using SSH client type: native
	I0816 00:23:49.657618   71480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:23:49.657641   71480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:23:49.947145   71480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:23:49.947179   71480 main.go:141] libmachine: Checking connection to Docker...
	I0816 00:23:49.947189   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetURL
	I0816 00:23:49.948667   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using libvirt version 6000000
	I0816 00:23:49.951004   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.951372   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:49.951399   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.951609   71480 main.go:141] libmachine: Docker is up and running!
	I0816 00:23:49.951622   71480 main.go:141] libmachine: Reticulating splines...
	I0816 00:23:49.951628   71480 client.go:171] duration metric: took 26.414068353s to LocalClient.Create
	I0816 00:23:49.951648   71480 start.go:167] duration metric: took 26.414127226s to libmachine.API.Create "old-k8s-version-098619"
	I0816 00:23:49.951678   71480 start.go:293] postStartSetup for "old-k8s-version-098619" (driver="kvm2")
	I0816 00:23:49.951687   71480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:23:49.951704   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:23:49.951942   71480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:23:49.951987   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:49.954147   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.954508   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:49.954544   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:49.954660   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:49.954863   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:49.955056   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:49.955187   71480 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:23:50.045050   71480 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:23:50.049576   71480 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:23:50.049609   71480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:23:50.049675   71480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:23:50.049761   71480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:23:50.049872   71480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:23:50.060176   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:23:50.086058   71480 start.go:296] duration metric: took 134.36836ms for postStartSetup
	I0816 00:23:50.086120   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:23:50.086767   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:23:50.089448   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.089889   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:50.089921   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.090180   71480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:23:50.090432   71480 start.go:128] duration metric: took 26.574790902s to createHost
	I0816 00:23:50.090466   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:50.092886   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.093221   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:50.093242   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.093443   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:50.093657   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:50.093835   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:50.094019   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:50.094189   71480 main.go:141] libmachine: Using SSH client type: native
	I0816 00:23:50.094398   71480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:23:50.094410   71480 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:23:50.207257   71480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723767830.159267902
	
	I0816 00:23:50.207285   71480 fix.go:216] guest clock: 1723767830.159267902
	I0816 00:23:50.207296   71480 fix.go:229] Guest: 2024-08-16 00:23:50.159267902 +0000 UTC Remote: 2024-08-16 00:23:50.090448873 +0000 UTC m=+53.934736366 (delta=68.819029ms)
	I0816 00:23:50.207325   71480 fix.go:200] guest clock delta is within tolerance: 68.819029ms
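The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and log the delta. A rough, self-contained Go sketch of that comparison, reusing the epoch string from the log; the tolerance value here is purely illustrative and not taken from minikube:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts `date +%s.%N` output (nine nanosecond digits assumed) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Epoch string copied from the SSH output above; normally it is read back over SSH.
	guest, err := parseEpoch("1723767830.159267902")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Illustrative threshold only; the real tolerance lives inside minikube's fix logic.
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}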
	I0816 00:23:50.207332   71480 start.go:83] releasing machines lock for "old-k8s-version-098619", held for 26.691888144s
	I0816 00:23:50.207357   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:23:50.207642   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:23:50.210905   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.211281   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:50.211305   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.211499   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:23:50.212094   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:23:50.212280   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:23:50.212429   71480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:23:50.212483   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:50.212556   71480 ssh_runner.go:195] Run: cat /version.json
	I0816 00:23:50.212578   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:23:50.215410   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.215727   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.215815   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:50.215842   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.215980   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:50.216152   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:50.216235   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:50.216256   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:50.216288   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:50.216412   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:23:50.216465   71480 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:23:50.216592   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:23:50.216737   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:23:50.216882   71480 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:23:50.322461   71480 ssh_runner.go:195] Run: systemctl --version
	I0816 00:23:50.329504   71480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:23:50.511586   71480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:23:50.518945   71480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:23:50.519025   71480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:23:50.537171   71480 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
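Before picking a CNI, the pre-existing bridge/podman configs under /etc/cni/net.d are renamed so they stop taking effect, which is what the `find ... -exec mv` above does. A minimal local Go sketch of the same idea; the glob patterns and the `.mk_disabled` suffix mirror the command shown, and running it against the real paths would need root:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	var disabled []string
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, m := range matches {
			// Skip files that were already moved aside on a previous run.
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			disabled = append(disabled, m)
		}
	}
	fmt.Println("disabled bridge/podman CNI configs:", disabled)
}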
	I0816 00:23:50.537199   71480 start.go:495] detecting cgroup driver to use...
	I0816 00:23:50.537295   71480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:23:50.556492   71480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:23:50.572929   71480 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:23:50.572988   71480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:23:50.589965   71480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:23:50.607107   71480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:23:50.732247   71480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:23:50.905725   71480 docker.go:233] disabling docker service ...
	I0816 00:23:50.905808   71480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:23:50.922184   71480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:23:50.937109   71480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:23:51.078931   71480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:23:51.225117   71480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:23:51.244023   71480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:23:51.267168   71480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 00:23:51.267238   71480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:23:51.279241   71480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:23:51.279315   71480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:23:51.293683   71480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:23:51.308347   71480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:23:51.320422   71480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:23:51.333589   71480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:23:51.345349   71480 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:23:51.345411   71480 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:23:51.361313   71480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:23:51.372684   71480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:23:51.531753   71480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:23:51.701909   71480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:23:51.701971   71480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:23:51.708460   71480 start.go:563] Will wait 60s for crictl version
	I0816 00:23:51.708543   71480 ssh_runner.go:195] Run: which crictl
	I0816 00:23:51.713592   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:23:51.755798   71480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:23:51.755888   71480 ssh_runner.go:195] Run: crio --version
	I0816 00:23:51.787377   71480 ssh_runner.go:195] Run: crio --version
	I0816 00:23:51.822578   71480 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 00:23:51.823926   71480 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:23:51.826603   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:51.826947   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:23:39 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:23:51.826976   71480 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:23:51.827243   71480 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:23:51.831958   71480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:23:51.846779   71480 kubeadm.go:883] updating cluster {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:23:51.846931   71480 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:23:51.846988   71480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:23:51.886654   71480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:23:51.886732   71480 ssh_runner.go:195] Run: which lz4
	I0816 00:23:51.891884   71480 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:23:51.897398   71480 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:23:51.897433   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 00:23:53.665082   71480 crio.go:462] duration metric: took 1.773245131s to copy over tarball
	I0816 00:23:53.665153   71480 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:23:56.544373   71480 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.879195194s)
	I0816 00:23:56.544395   71480 crio.go:469] duration metric: took 2.879284718s to extract the tarball
	I0816 00:23:56.544402   71480 ssh_runner.go:146] rm: /preloaded.tar.lz4
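Since no preloaded images were found in the runtime, the tarball of cached images was copied over, unpacked into /var with tar + lz4, and then removed. A minimal Go sketch that shells out to the same tar invocation seen in the log (it assumes `tar` and `lz4` are present on the host, as they are on the minikube ISO):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	// The tarball is only a transport artifact, so it is deleted once extracted.
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}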
	I0816 00:23:56.602304   71480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:23:56.666424   71480 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:23:56.666457   71480 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:23:56.666531   71480 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:23:56.666543   71480 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:23:56.666548   71480 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:23:56.666593   71480 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:23:56.666603   71480 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 00:23:56.666608   71480 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:23:56.666764   71480 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:23:56.666574   71480 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 00:23:56.668944   71480 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 00:23:56.669266   71480 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:23:56.669275   71480 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 00:23:56.669281   71480 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:23:56.669384   71480 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:23:56.669459   71480 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:23:56.669391   71480 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:23:56.670683   71480 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:23:56.833560   71480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 00:23:56.835053   71480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 00:23:56.838977   71480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:23:56.840273   71480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:23:56.843050   71480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:23:56.847426   71480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:23:56.885497   71480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 00:23:56.967695   71480 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:23:57.023682   71480 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 00:23:57.023741   71480 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 00:23:57.023790   71480 ssh_runner.go:195] Run: which crictl
	I0816 00:23:57.023873   71480 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 00:23:57.023906   71480 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:23:57.023933   71480 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 00:23:57.023949   71480 ssh_runner.go:195] Run: which crictl
	I0816 00:23:57.023966   71480 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:23:57.023996   71480 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 00:23:57.024007   71480 ssh_runner.go:195] Run: which crictl
	I0816 00:23:57.024022   71480 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:23:57.024054   71480 ssh_runner.go:195] Run: which crictl
	I0816 00:23:57.069580   71480 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 00:23:57.069621   71480 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 00:23:57.069629   71480 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:23:57.069652   71480 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:23:57.069682   71480 ssh_runner.go:195] Run: which crictl
	I0816 00:23:57.069693   71480 ssh_runner.go:195] Run: which crictl
	I0816 00:23:57.089741   71480 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 00:23:57.089788   71480 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 00:23:57.089863   71480 ssh_runner.go:195] Run: which crictl
	I0816 00:23:57.201861   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:23:57.201891   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:23:57.201912   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:23:57.201917   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:23:57.201837   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:23:57.201962   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:23:57.201983   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:23:57.397784   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:23:57.397921   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:23:57.397974   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:23:57.397995   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:23:57.398068   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:23:57.398111   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:23:57.398186   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:23:57.530373   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:23:57.566217   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:23:57.566343   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:23:57.566393   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:23:57.566421   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:23:57.566475   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:23:57.566492   71480 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:23:57.653894   71480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 00:23:57.731772   71480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 00:23:57.731840   71480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 00:23:57.732071   71480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 00:23:57.740586   71480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 00:23:57.740650   71480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 00:23:57.740686   71480 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 00:23:57.740731   71480 cache_images.go:92] duration metric: took 1.074259279s to LoadCachedImages
	W0816 00:23:57.740799   71480 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0816 00:23:57.740814   71480 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0816 00:23:57.740994   71480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-098619 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:23:57.741209   71480 ssh_runner.go:195] Run: crio config
	I0816 00:23:57.813807   71480 cni.go:84] Creating CNI manager for ""
	I0816 00:23:57.813830   71480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:23:57.813862   71480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:23:57.813887   71480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098619 NodeName:old-k8s-version-098619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 00:23:57.814044   71480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-098619"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:23:57.814114   71480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 00:23:57.827341   71480 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:23:57.827405   71480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:23:57.840760   71480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 00:23:57.864626   71480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:23:57.890374   71480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
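The kubeadm config printed above is rendered from the kubeadm options struct and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. As a hedged illustration (not minikube's actual template), a text/template sketch that produces a fragment of that YAML from the same values:

package main

import (
	"os"
	"text/template"
)

type clusterOpts struct {
	BindPort          int
	ClusterName       string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

// Illustrative template text; the real kubeadm.yaml also carries Init, Kubelet and KubeProxy documents.
const fragment = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := clusterOpts{
		BindPort:          8443,
		ClusterName:       "mk",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	// Render to stdout; the real flow writes the rendered documents to kubeadm.yaml.new on the node.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}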
	I0816 00:23:57.910421   71480 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0816 00:23:57.914691   71480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:23:57.928645   71480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:23:58.075463   71480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:23:58.095142   71480 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619 for IP: 192.168.72.137
	I0816 00:23:58.095165   71480 certs.go:194] generating shared ca certs ...
	I0816 00:23:58.095188   71480 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:23:58.095350   71480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:23:58.095416   71480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:23:58.095428   71480 certs.go:256] generating profile certs ...
	I0816 00:23:58.095494   71480 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key
	I0816 00:23:58.095512   71480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.crt with IP's: []
	I0816 00:23:58.304674   71480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.crt ...
	I0816 00:23:58.304703   71480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.crt: {Name:mke7fc0eca18c780fc6ab06471f19438d004102d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:23:58.304892   71480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key ...
	I0816 00:23:58.304939   71480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key: {Name:mk8cdfa3a05543970a9cc87426dce9e2654be571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:23:58.305060   71480 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4
	I0816 00:23:58.305078   71480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt.97f18ce4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.137]
	I0816 00:23:58.828487   71480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt.97f18ce4 ...
	I0816 00:23:58.828517   71480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt.97f18ce4: {Name:mk139e8bdb440a4dddcba84b4db4fccc827277ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:23:58.828673   71480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4 ...
	I0816 00:23:58.828689   71480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4: {Name:mkb48a9eda56a1843e084dacb0b0ee7f1836ade4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:23:58.828783   71480 certs.go:381] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt.97f18ce4 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt
	I0816 00:23:58.828869   71480 certs.go:385] copying /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4 -> /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key
	I0816 00:23:58.828951   71480 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key
	I0816 00:23:58.828968   71480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt with IP's: []
	I0816 00:23:58.902191   71480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt ...
	I0816 00:23:58.902229   71480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt: {Name:mkb8e4c92209756e79f53bffdd8a3937df3ac6b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:23:58.902436   71480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key ...
	I0816 00:23:58.902452   71480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key: {Name:mke65d7e7f8c15c4e8193dbcfd0d4e0912925d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
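The certs.go/crypto.go lines above mint the profile's client, apiserver, and proxy-client certificates under the shared minikubeCA. As a rough illustration only (not minikube's crypto.go, and self-signed rather than CA-signed), a self-contained Go sketch that generates an RSA key plus a certificate carrying the same IP SANs the apiserver cert receives in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same SANs as the apiserver cert generated in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.137"),
		},
	}
	// Self-signed for brevity; minikube signs these certs with its shared CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}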
	I0816 00:23:58.902673   71480 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:23:58.902720   71480 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:23:58.902734   71480 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:23:58.902767   71480 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:23:58.902798   71480 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:23:58.902832   71480 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:23:58.902883   71480 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:23:58.903517   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:23:58.930564   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:23:58.960364   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:23:58.988646   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:23:59.023103   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 00:23:59.055680   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:23:59.101872   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:23:59.136360   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:23:59.161547   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:23:59.188440   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:23:59.217549   71480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:23:59.250604   71480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:23:59.269267   71480 ssh_runner.go:195] Run: openssl version
	I0816 00:23:59.275614   71480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:23:59.289188   71480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:23:59.293889   71480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:23:59.293966   71480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:23:59.301078   71480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:23:59.315841   71480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:23:59.328973   71480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:23:59.333675   71480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:23:59.333726   71480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:23:59.339737   71480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:23:59.352945   71480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:23:59.365693   71480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:23:59.370716   71480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:23:59.370766   71480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:23:59.376935   71480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:23:59.390055   71480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:23:59.394344   71480 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 00:23:59.394407   71480 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:23:59.394498   71480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:23:59.394577   71480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:23:59.447937   71480 cri.go:89] found id: ""
	I0816 00:23:59.448015   71480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:23:59.459318   71480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:23:59.471868   71480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:23:59.484358   71480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:23:59.484378   71480 kubeadm.go:157] found existing configuration files:
	
	I0816 00:23:59.484427   71480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:23:59.497111   71480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:23:59.497182   71480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:23:59.508856   71480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:23:59.520445   71480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:23:59.520516   71480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:23:59.532177   71480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:23:59.554162   71480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:23:59.554222   71480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:23:59.567270   71480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:23:59.585509   71480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:23:59.585556   71480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:23:59.600379   71480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:23:59.956069   71480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:25:57.721070   71480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:25:57.721151   71480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:25:57.722862   71480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:25:57.722932   71480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:25:57.723032   71480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:25:57.723118   71480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:25:57.723211   71480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:25:57.723272   71480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:25:57.724890   71480 out.go:235]   - Generating certificates and keys ...
	I0816 00:25:57.724986   71480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:25:57.725063   71480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:25:57.725165   71480 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 00:25:57.725238   71480 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 00:25:57.725307   71480 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 00:25:57.725363   71480 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 00:25:57.725430   71480 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 00:25:57.725619   71480 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-098619] and IPs [192.168.72.137 127.0.0.1 ::1]
	I0816 00:25:57.725718   71480 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 00:25:57.725894   71480 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-098619] and IPs [192.168.72.137 127.0.0.1 ::1]
	I0816 00:25:57.725965   71480 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 00:25:57.726035   71480 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 00:25:57.726207   71480 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 00:25:57.726302   71480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:25:57.726393   71480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:25:57.726472   71480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:25:57.726589   71480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:25:57.726661   71480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:25:57.726747   71480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:25:57.726823   71480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:25:57.726857   71480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:25:57.726922   71480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:25:57.728291   71480 out.go:235]   - Booting up control plane ...
	I0816 00:25:57.728367   71480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:25:57.728440   71480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:25:57.728527   71480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:25:57.728633   71480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:25:57.728849   71480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:25:57.728923   71480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:25:57.729019   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:25:57.729235   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:25:57.729329   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:25:57.729586   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:25:57.729685   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:25:57.729893   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:25:57.729974   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:25:57.730177   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:25:57.730256   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:25:57.730472   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:25:57.730483   71480 kubeadm.go:310] 
	I0816 00:25:57.730517   71480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:25:57.730554   71480 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:25:57.730561   71480 kubeadm.go:310] 
	I0816 00:25:57.730590   71480 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:25:57.730622   71480 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:25:57.730712   71480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:25:57.730720   71480 kubeadm.go:310] 
	I0816 00:25:57.730813   71480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:25:57.730842   71480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:25:57.730955   71480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:25:57.730972   71480 kubeadm.go:310] 
	I0816 00:25:57.731121   71480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:25:57.731240   71480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:25:57.731250   71480 kubeadm.go:310] 
	I0816 00:25:57.731388   71480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:25:57.731518   71480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:25:57.731608   71480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:25:57.731683   71480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:25:57.731713   71480 kubeadm.go:310] 
	W0816 00:25:57.731819   71480 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-098619] and IPs [192.168.72.137 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-098619] and IPs [192.168.72.137 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 00:25:57.731855   71480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:25:59.061148   71480 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.329263599s)
	I0816 00:25:59.061244   71480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:25:59.076053   71480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:25:59.086099   71480 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:25:59.086119   71480 kubeadm.go:157] found existing configuration files:
	
	I0816 00:25:59.086170   71480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:25:59.098650   71480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:25:59.098719   71480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:25:59.110255   71480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:25:59.121289   71480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:25:59.121355   71480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:25:59.132527   71480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:25:59.143859   71480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:25:59.143935   71480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:25:59.154586   71480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:25:59.164414   71480 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:25:59.164472   71480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:25:59.174224   71480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:25:59.400193   71480 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:27:55.515429   71480 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:27:55.515509   71480 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:27:55.516956   71480 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:27:55.517041   71480 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:27:55.517134   71480 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:27:55.517249   71480 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:27:55.517361   71480 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:27:55.517418   71480 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:27:55.519039   71480 out.go:235]   - Generating certificates and keys ...
	I0816 00:27:55.519108   71480 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:27:55.519160   71480 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:27:55.519258   71480 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:27:55.519351   71480 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:27:55.519450   71480 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:27:55.519525   71480 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:27:55.519618   71480 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:27:55.519676   71480 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:27:55.519757   71480 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:27:55.519833   71480 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:27:55.519871   71480 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:27:55.519920   71480 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:27:55.519965   71480 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:27:55.520011   71480 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:27:55.520069   71480 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:27:55.520116   71480 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:27:55.520223   71480 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:27:55.520342   71480 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:27:55.520405   71480 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:27:55.520496   71480 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:27:55.522140   71480 out.go:235]   - Booting up control plane ...
	I0816 00:27:55.522235   71480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:27:55.522314   71480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:27:55.522374   71480 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:27:55.522456   71480 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:27:55.522607   71480 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:27:55.522651   71480 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:27:55.522718   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:27:55.522868   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:27:55.522931   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:27:55.523097   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:27:55.523168   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:27:55.523350   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:27:55.523422   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:27:55.523577   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:27:55.523637   71480 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:27:55.523796   71480 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:27:55.523805   71480 kubeadm.go:310] 
	I0816 00:27:55.523838   71480 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:27:55.523876   71480 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:27:55.523882   71480 kubeadm.go:310] 
	I0816 00:27:55.523934   71480 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:27:55.523964   71480 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:27:55.524054   71480 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:27:55.524063   71480 kubeadm.go:310] 
	I0816 00:27:55.524151   71480 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:27:55.524194   71480 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:27:55.524239   71480 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:27:55.524253   71480 kubeadm.go:310] 
	I0816 00:27:55.524352   71480 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:27:55.524428   71480 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:27:55.524434   71480 kubeadm.go:310] 
	I0816 00:27:55.524529   71480 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:27:55.524608   71480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:27:55.524676   71480 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:27:55.524741   71480 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:27:55.524761   71480 kubeadm.go:310] 
	I0816 00:27:55.524792   71480 kubeadm.go:394] duration metric: took 3m56.130389863s to StartCluster
	I0816 00:27:55.524832   71480 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:27:55.524887   71480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:27:55.567584   71480 cri.go:89] found id: ""
	I0816 00:27:55.567633   71480 logs.go:276] 0 containers: []
	W0816 00:27:55.567646   71480 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:27:55.567660   71480 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:27:55.567723   71480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:27:55.603407   71480 cri.go:89] found id: ""
	I0816 00:27:55.603434   71480 logs.go:276] 0 containers: []
	W0816 00:27:55.603445   71480 logs.go:278] No container was found matching "etcd"
	I0816 00:27:55.603452   71480 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:27:55.603512   71480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:27:55.638774   71480 cri.go:89] found id: ""
	I0816 00:27:55.638800   71480 logs.go:276] 0 containers: []
	W0816 00:27:55.638809   71480 logs.go:278] No container was found matching "coredns"
	I0816 00:27:55.638814   71480 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:27:55.638876   71480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:27:55.673827   71480 cri.go:89] found id: ""
	I0816 00:27:55.673859   71480 logs.go:276] 0 containers: []
	W0816 00:27:55.673867   71480 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:27:55.673873   71480 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:27:55.673927   71480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:27:55.711414   71480 cri.go:89] found id: ""
	I0816 00:27:55.711436   71480 logs.go:276] 0 containers: []
	W0816 00:27:55.711443   71480 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:27:55.711449   71480 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:27:55.711499   71480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:27:55.747525   71480 cri.go:89] found id: ""
	I0816 00:27:55.747546   71480 logs.go:276] 0 containers: []
	W0816 00:27:55.747554   71480 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:27:55.747560   71480 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:27:55.747612   71480 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:27:55.781629   71480 cri.go:89] found id: ""
	I0816 00:27:55.781659   71480 logs.go:276] 0 containers: []
	W0816 00:27:55.781670   71480 logs.go:278] No container was found matching "kindnet"
	I0816 00:27:55.781681   71480 logs.go:123] Gathering logs for kubelet ...
	I0816 00:27:55.781697   71480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:27:55.832807   71480 logs.go:123] Gathering logs for dmesg ...
	I0816 00:27:55.832839   71480 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:27:55.847159   71480 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:27:55.847184   71480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:27:56.003956   71480 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:27:56.003985   71480 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:27:56.004000   71480 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:27:56.115481   71480 logs.go:123] Gathering logs for container status ...
	I0816 00:27:56.115516   71480 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0816 00:27:56.153683   71480 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 00:27:56.153739   71480 out.go:270] * 
	W0816 00:27:56.153798   71480 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:27:56.153817   71480 out.go:270] * 
	W0816 00:27:56.154834   71480 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:27:56.158632   71480 out.go:201] 
	W0816 00:27:56.159911   71480 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:27:56.159953   71480 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 00:27:56.159971   71480 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 00:27:56.161372   71480 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-098619 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 6 (219.956532ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:27:56.429542   78090 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-098619" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (300.29s)
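This FirstStart failure is the K8S_KUBELET_NOT_RUNNING path: kubeadm's wait-control-plane phase gives up because the kubelet never answers its health check on localhost:10248. Below is a minimal triage sketch on the affected VM that follows the suggestions the log itself prints; the profile name and the CRI-O socket path are taken from the output above, and whether these commands expose the actual root cause on this runner is an assumption.

	# Open a shell inside the VM for the failing profile (profile name from the log above)
	out/minikube-linux-amd64 ssh -p old-k8s-version-098619
	# Inside the VM: check whether the kubelet service is up and why it may have exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# List control-plane containers through the CRI-O socket named in the kubeadm output
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet journal points at a cgroup-driver mismatch, the suggestion already printed above (retrying the start with --extra-config=kubelet.cgroup-driver=systemd) is the natural next step.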

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-819398 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-819398 --alsologtostderr -v=3: exit status 82 (2m0.476895116s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-819398"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 00:25:55.415746   77410 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:25:55.415863   77410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:25:55.415871   77410 out.go:358] Setting ErrFile to fd 2...
	I0816 00:25:55.415875   77410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:25:55.416064   77410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:25:55.416263   77410 out.go:352] Setting JSON to false
	I0816 00:25:55.416335   77410 mustload.go:65] Loading cluster: no-preload-819398
	I0816 00:25:55.416658   77410 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:25:55.416731   77410 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/config.json ...
	I0816 00:25:55.416896   77410 mustload.go:65] Loading cluster: no-preload-819398
	I0816 00:25:55.417004   77410 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:25:55.417027   77410 stop.go:39] StopHost: no-preload-819398
	I0816 00:25:55.417417   77410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:25:55.417452   77410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:25:55.432431   77410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I0816 00:25:55.432887   77410 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:25:55.433462   77410 main.go:141] libmachine: Using API Version  1
	I0816 00:25:55.433484   77410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:25:55.433804   77410 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:25:55.436102   77410 out.go:177] * Stopping node "no-preload-819398"  ...
	I0816 00:25:55.437432   77410 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 00:25:55.437470   77410 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:25:55.437683   77410 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 00:25:55.437706   77410 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:25:55.440856   77410 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:25:55.441368   77410 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:24:20 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:25:55.441396   77410 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:25:55.441557   77410 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:25:55.441734   77410 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:25:55.441911   77410 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:25:55.442066   77410 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:25:55.528228   77410 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 00:25:55.586862   77410 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 00:25:55.649122   77410 main.go:141] libmachine: Stopping "no-preload-819398"...
	I0816 00:25:55.649161   77410 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:25:55.650792   77410 main.go:141] libmachine: (no-preload-819398) Calling .Stop
	I0816 00:25:55.654521   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 0/120
	I0816 00:25:56.656715   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 1/120
	I0816 00:25:57.657992   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 2/120
	I0816 00:25:58.659535   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 3/120
	I0816 00:25:59.661041   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 4/120
	I0816 00:26:00.663102   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 5/120
	I0816 00:26:01.665129   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 6/120
	I0816 00:26:02.666488   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 7/120
	I0816 00:26:03.667927   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 8/120
	I0816 00:26:04.669400   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 9/120
	I0816 00:26:05.671759   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 10/120
	I0816 00:26:06.673174   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 11/120
	I0816 00:26:07.674706   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 12/120
	I0816 00:26:08.676355   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 13/120
	I0816 00:26:09.678021   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 14/120
	I0816 00:26:10.680075   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 15/120
	I0816 00:26:11.681326   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 16/120
	I0816 00:26:12.682713   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 17/120
	I0816 00:26:13.683977   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 18/120
	I0816 00:26:14.685367   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 19/120
	I0816 00:26:15.687462   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 20/120
	I0816 00:26:16.688885   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 21/120
	I0816 00:26:17.690120   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 22/120
	I0816 00:26:18.692437   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 23/120
	I0816 00:26:19.693881   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 24/120
	I0816 00:26:20.695934   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 25/120
	I0816 00:26:21.697288   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 26/120
	I0816 00:26:22.698753   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 27/120
	I0816 00:26:23.700226   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 28/120
	I0816 00:26:24.701729   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 29/120
	I0816 00:26:25.704153   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 30/120
	I0816 00:26:26.705350   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 31/120
	I0816 00:26:27.706871   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 32/120
	I0816 00:26:28.708213   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 33/120
	I0816 00:26:29.709762   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 34/120
	I0816 00:26:30.711680   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 35/120
	I0816 00:26:31.713215   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 36/120
	I0816 00:26:32.714782   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 37/120
	I0816 00:26:33.716201   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 38/120
	I0816 00:26:34.718073   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 39/120
	I0816 00:26:35.719286   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 40/120
	I0816 00:26:36.720588   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 41/120
	I0816 00:26:37.722101   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 42/120
	I0816 00:26:38.723613   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 43/120
	I0816 00:26:39.724929   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 44/120
	I0816 00:26:40.727009   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 45/120
	I0816 00:26:41.728289   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 46/120
	I0816 00:26:42.729765   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 47/120
	I0816 00:26:43.731170   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 48/120
	I0816 00:26:44.732454   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 49/120
	I0816 00:26:45.734775   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 50/120
	I0816 00:26:46.735887   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 51/120
	I0816 00:26:47.737224   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 52/120
	I0816 00:26:48.738488   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 53/120
	I0816 00:26:49.739943   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 54/120
	I0816 00:26:50.741923   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 55/120
	I0816 00:26:51.743264   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 56/120
	I0816 00:26:52.744627   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 57/120
	I0816 00:26:53.745880   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 58/120
	I0816 00:26:54.747039   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 59/120
	I0816 00:26:55.748962   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 60/120
	I0816 00:26:56.751346   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 61/120
	I0816 00:26:57.752580   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 62/120
	I0816 00:26:58.753992   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 63/120
	I0816 00:26:59.755238   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 64/120
	I0816 00:27:00.757289   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 65/120
	I0816 00:27:01.758862   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 66/120
	I0816 00:27:02.760248   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 67/120
	I0816 00:27:03.761655   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 68/120
	I0816 00:27:04.763178   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 69/120
	I0816 00:27:05.765422   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 70/120
	I0816 00:27:06.766742   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 71/120
	I0816 00:27:07.768000   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 72/120
	I0816 00:27:08.769255   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 73/120
	I0816 00:27:09.770462   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 74/120
	I0816 00:27:10.772389   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 75/120
	I0816 00:27:11.773609   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 76/120
	I0816 00:27:12.774796   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 77/120
	I0816 00:27:13.776095   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 78/120
	I0816 00:27:14.777408   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 79/120
	I0816 00:27:15.779643   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 80/120
	I0816 00:27:16.780987   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 81/120
	I0816 00:27:17.782406   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 82/120
	I0816 00:27:18.783785   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 83/120
	I0816 00:27:19.785080   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 84/120
	I0816 00:27:20.787052   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 85/120
	I0816 00:27:21.788391   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 86/120
	I0816 00:27:22.789820   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 87/120
	I0816 00:27:23.791117   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 88/120
	I0816 00:27:24.792545   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 89/120
	I0816 00:27:25.794722   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 90/120
	I0816 00:27:26.796994   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 91/120
	I0816 00:27:27.798338   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 92/120
	I0816 00:27:28.800307   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 93/120
	I0816 00:27:29.801639   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 94/120
	I0816 00:27:30.803589   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 95/120
	I0816 00:27:31.805080   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 96/120
	I0816 00:27:32.806381   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 97/120
	I0816 00:27:33.807931   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 98/120
	I0816 00:27:34.809249   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 99/120
	I0816 00:27:35.811404   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 100/120
	I0816 00:27:36.813380   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 101/120
	I0816 00:27:37.814969   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 102/120
	I0816 00:27:38.816581   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 103/120
	I0816 00:27:39.817914   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 104/120
	I0816 00:27:40.819976   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 105/120
	I0816 00:27:41.821193   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 106/120
	I0816 00:27:42.822417   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 107/120
	I0816 00:27:43.823986   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 108/120
	I0816 00:27:44.825213   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 109/120
	I0816 00:27:45.827431   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 110/120
	I0816 00:27:46.828989   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 111/120
	I0816 00:27:47.830376   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 112/120
	I0816 00:27:48.832180   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 113/120
	I0816 00:27:49.833463   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 114/120
	I0816 00:27:50.835287   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 115/120
	I0816 00:27:51.837096   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 116/120
	I0816 00:27:52.839281   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 117/120
	I0816 00:27:53.840872   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 118/120
	I0816 00:27:54.842224   77410 main.go:141] libmachine: (no-preload-819398) Waiting for machine to stop 119/120
	I0816 00:27:55.842758   77410 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 00:27:55.842829   77410 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 00:27:55.844734   77410 out.go:201] 
	W0816 00:27:55.845938   77410 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 00:27:55.845952   77410 out.go:270] * 
	* 
	W0816 00:27:55.850188   77410 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:27:55.851413   77410 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-819398 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-819398 -n no-preload-819398
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-819398 -n no-preload-819398: exit status 3 (18.589242777s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:28:14.442181   78060 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.15:22: connect: no route to host
	E0816 00:28:14.442199   78060 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.15:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-819398" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.07s)
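The GUEST_STOP_TIMEOUT above means libmachine polled the guest for all 120 roughly one-second attempts and the KVM domain never left the "Running" state, so the stop command's budget was exhausted. The following is a hedged sketch for confirming and forcing the shutdown from the host when reproducing this locally; the libvirt domain name is assumed to match the minikube profile name, which is how the kvm2 driver normally names domains, and the qemu:///system URI matches the one used elsewhere in this run.

	# See what libvirt thinks the guest is doing (domain name assumed to equal the profile name)
	virsh -c qemu:///system list --all | grep no-preload-819398
	virsh -c qemu:///system domstate no-preload-819398
	# Request a graceful ACPI shutdown first; hard-stop only if the guest keeps ignoring it
	virsh -c qemu:///system shutdown no-preload-819398
	virsh -c qemu:///system destroy no-preload-819398

The later status failure ("no route to host" on 192.168.61.15:22 about 18 seconds afterwards) suggests the guest did eventually go down on its own, just not within the window the stop command waits.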

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-758469 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-758469 --alsologtostderr -v=3: exit status 82 (2m0.496816961s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-758469"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 00:26:07.802949   77620 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:26:07.803086   77620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:26:07.803096   77620 out.go:358] Setting ErrFile to fd 2...
	I0816 00:26:07.803100   77620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:26:07.803331   77620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:26:07.803623   77620 out.go:352] Setting JSON to false
	I0816 00:26:07.803703   77620 mustload.go:65] Loading cluster: embed-certs-758469
	I0816 00:26:07.804033   77620 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:26:07.804102   77620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/config.json ...
	I0816 00:26:07.804261   77620 mustload.go:65] Loading cluster: embed-certs-758469
	I0816 00:26:07.804397   77620 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:26:07.804434   77620 stop.go:39] StopHost: embed-certs-758469
	I0816 00:26:07.804820   77620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:26:07.804859   77620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:26:07.819575   77620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I0816 00:26:07.820040   77620 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:26:07.820593   77620 main.go:141] libmachine: Using API Version  1
	I0816 00:26:07.820620   77620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:26:07.821002   77620 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:26:07.823087   77620 out.go:177] * Stopping node "embed-certs-758469"  ...
	I0816 00:26:07.824406   77620 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 00:26:07.824448   77620 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:26:07.824681   77620 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 00:26:07.824704   77620 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:26:07.827333   77620 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:26:07.827711   77620 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:24:46 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:26:07.827742   77620 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:26:07.827902   77620 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:26:07.828089   77620 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:26:07.828257   77620 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:26:07.828413   77620 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:26:07.940854   77620 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 00:26:07.998648   77620 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 00:26:08.060008   77620 main.go:141] libmachine: Stopping "embed-certs-758469"...
	I0816 00:26:08.060051   77620 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:26:08.061975   77620 main.go:141] libmachine: (embed-certs-758469) Calling .Stop
	I0816 00:26:08.065919   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 0/120
	I0816 00:26:09.067457   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 1/120
	I0816 00:26:10.068594   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 2/120
	I0816 00:26:11.069957   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 3/120
	I0816 00:26:12.071115   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 4/120
	I0816 00:26:13.073088   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 5/120
	I0816 00:26:14.074542   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 6/120
	I0816 00:26:15.075725   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 7/120
	I0816 00:26:16.077102   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 8/120
	I0816 00:26:17.078305   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 9/120
	I0816 00:26:18.080491   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 10/120
	I0816 00:26:19.082029   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 11/120
	I0816 00:26:20.083323   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 12/120
	I0816 00:26:21.084719   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 13/120
	I0816 00:26:22.085916   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 14/120
	I0816 00:26:23.087719   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 15/120
	I0816 00:26:24.089135   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 16/120
	I0816 00:26:25.090357   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 17/120
	I0816 00:26:26.091825   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 18/120
	I0816 00:26:27.093052   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 19/120
	I0816 00:26:28.095320   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 20/120
	I0816 00:26:29.096835   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 21/120
	I0816 00:26:30.097993   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 22/120
	I0816 00:26:31.099437   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 23/120
	I0816 00:26:32.100892   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 24/120
	I0816 00:26:33.102903   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 25/120
	I0816 00:26:34.104133   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 26/120
	I0816 00:26:35.105555   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 27/120
	I0816 00:26:36.106825   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 28/120
	I0816 00:26:37.108279   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 29/120
	I0816 00:26:38.109680   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 30/120
	I0816 00:26:39.111018   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 31/120
	I0816 00:26:40.112376   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 32/120
	I0816 00:26:41.113786   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 33/120
	I0816 00:26:42.115199   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 34/120
	I0816 00:26:43.117305   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 35/120
	I0816 00:26:44.119512   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 36/120
	I0816 00:26:45.120958   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 37/120
	I0816 00:26:46.122244   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 38/120
	I0816 00:26:47.123647   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 39/120
	I0816 00:26:48.126013   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 40/120
	I0816 00:26:49.127298   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 41/120
	I0816 00:26:50.128660   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 42/120
	I0816 00:26:51.129881   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 43/120
	I0816 00:26:52.131343   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 44/120
	I0816 00:26:53.133238   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 45/120
	I0816 00:26:54.134705   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 46/120
	I0816 00:26:55.135945   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 47/120
	I0816 00:26:56.137263   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 48/120
	I0816 00:26:57.138504   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 49/120
	I0816 00:26:58.140619   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 50/120
	I0816 00:26:59.142070   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 51/120
	I0816 00:27:00.143400   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 52/120
	I0816 00:27:01.144843   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 53/120
	I0816 00:27:02.146143   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 54/120
	I0816 00:27:03.148311   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 55/120
	I0816 00:27:04.150696   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 56/120
	I0816 00:27:05.152154   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 57/120
	I0816 00:27:06.153675   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 58/120
	I0816 00:27:07.155017   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 59/120
	I0816 00:27:08.156793   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 60/120
	I0816 00:27:09.158146   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 61/120
	I0816 00:27:10.160179   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 62/120
	I0816 00:27:11.161594   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 63/120
	I0816 00:27:12.162838   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 64/120
	I0816 00:27:13.164755   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 65/120
	I0816 00:27:14.166216   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 66/120
	I0816 00:27:15.167677   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 67/120
	I0816 00:27:16.169026   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 68/120
	I0816 00:27:17.170577   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 69/120
	I0816 00:27:18.172273   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 70/120
	I0816 00:27:19.173747   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 71/120
	I0816 00:27:20.175114   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 72/120
	I0816 00:27:21.176277   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 73/120
	I0816 00:27:22.177800   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 74/120
	I0816 00:27:23.179677   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 75/120
	I0816 00:27:24.180938   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 76/120
	I0816 00:27:25.182447   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 77/120
	I0816 00:27:26.184305   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 78/120
	I0816 00:27:27.185862   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 79/120
	I0816 00:27:28.187883   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 80/120
	I0816 00:27:29.189171   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 81/120
	I0816 00:27:30.190659   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 82/120
	I0816 00:27:31.192027   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 83/120
	I0816 00:27:32.193475   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 84/120
	I0816 00:27:33.195370   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 85/120
	I0816 00:27:34.196646   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 86/120
	I0816 00:27:35.198211   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 87/120
	I0816 00:27:36.199519   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 88/120
	I0816 00:27:37.201082   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 89/120
	I0816 00:27:38.203150   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 90/120
	I0816 00:27:39.204663   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 91/120
	I0816 00:27:40.206082   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 92/120
	I0816 00:27:41.208295   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 93/120
	I0816 00:27:42.209527   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 94/120
	I0816 00:27:43.211489   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 95/120
	I0816 00:27:44.212845   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 96/120
	I0816 00:27:45.214850   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 97/120
	I0816 00:27:46.216116   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 98/120
	I0816 00:27:47.217520   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 99/120
	I0816 00:27:48.219637   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 100/120
	I0816 00:27:49.221004   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 101/120
	I0816 00:27:50.222475   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 102/120
	I0816 00:27:51.223710   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 103/120
	I0816 00:27:52.224904   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 104/120
	I0816 00:27:53.226861   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 105/120
	I0816 00:27:54.228174   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 106/120
	I0816 00:27:55.229506   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 107/120
	I0816 00:27:56.231145   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 108/120
	I0816 00:27:57.233432   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 109/120
	I0816 00:27:58.235366   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 110/120
	I0816 00:27:59.236986   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 111/120
	I0816 00:28:00.238276   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 112/120
	I0816 00:28:01.239829   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 113/120
	I0816 00:28:02.241217   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 114/120
	I0816 00:28:03.243389   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 115/120
	I0816 00:28:04.244605   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 116/120
	I0816 00:28:05.246546   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 117/120
	I0816 00:28:06.247999   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 118/120
	I0816 00:28:07.249424   77620 main.go:141] libmachine: (embed-certs-758469) Waiting for machine to stop 119/120
	I0816 00:28:08.250029   77620 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 00:28:08.250089   77620 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 00:28:08.251907   77620 out.go:201] 
	W0816 00:28:08.253059   77620 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 00:28:08.253071   77620 out.go:270] * 
	* 
	W0816 00:28:08.256191   77620 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:28:08.257476   77620 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-758469 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-758469 -n embed-certs-758469
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-758469 -n embed-certs-758469: exit status 3 (18.470490749s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:28:26.730133   78270 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	E0816 00:28:26.730153   78270 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-758469" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.97s)
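This is the same GUEST_STOP_TIMEOUT seen for no-preload above, this time against the embed-certs profile. When filing the issue that the boxed output asks for, something like the following collects the requested artifacts; the /tmp log path is copied verbatim from the box above, and the rest is a sketch rather than a prescribed procedure.

	# Capture cluster logs for the GitHub issue, as the output above suggests
	out/minikube-linux-amd64 -p embed-certs-758469 logs --file=logs.txt
	# Re-check host state after the failed stop (exit status 3 here only means the VM is unreachable over SSH)
	out/minikube-linux-amd64 status -p embed-certs-758469
	# Attach the per-run stop log referenced in the box
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log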

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-616827 --alsologtostderr -v=3
E0816 00:26:31.509346   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:31.515773   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:31.527265   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:31.548645   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:31.590015   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:31.671448   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:31.833078   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:32.155206   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:32.797141   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:34.078572   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:36.640644   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:41.762629   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:47.150986   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:26:52.004048   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:09.800888   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:09.807254   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:09.818608   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:09.840008   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:09.881886   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:09.963343   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:10.124933   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:10.446676   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:11.088770   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:12.370211   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:12.485715   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:14.932343   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:20.054287   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:28.472646   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:28.479065   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:28.490389   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:28.511815   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:28.553213   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:28.634682   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:28.796186   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:29.118098   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:29.760104   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:30.296058   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:31.041495   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:33.603007   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:38.724374   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:48.966037   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:50.778317   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:51.159916   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:27:53.447413   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-616827 --alsologtostderr -v=3: exit status 82 (2m0.477147447s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-616827"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 00:26:08.536460   77648 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:26:08.536566   77648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:26:08.536574   77648 out.go:358] Setting ErrFile to fd 2...
	I0816 00:26:08.536578   77648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:26:08.536752   77648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:26:08.537005   77648 out.go:352] Setting JSON to false
	I0816 00:26:08.537081   77648 mustload.go:65] Loading cluster: default-k8s-diff-port-616827
	I0816 00:26:08.537411   77648 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:26:08.537491   77648 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/config.json ...
	I0816 00:26:08.537648   77648 mustload.go:65] Loading cluster: default-k8s-diff-port-616827
	I0816 00:26:08.537762   77648 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:26:08.537796   77648 stop.go:39] StopHost: default-k8s-diff-port-616827
	I0816 00:26:08.538196   77648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:26:08.538233   77648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:26:08.552798   77648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
	I0816 00:26:08.553348   77648 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:26:08.553878   77648 main.go:141] libmachine: Using API Version  1
	I0816 00:26:08.553904   77648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:26:08.554264   77648 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:26:08.556749   77648 out.go:177] * Stopping node "default-k8s-diff-port-616827"  ...
	I0816 00:26:08.557966   77648 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0816 00:26:08.557992   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:26:08.558222   77648 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0816 00:26:08.558252   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:26:08.560991   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:26:08.561435   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:25:16 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:26:08.561471   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:26:08.561612   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:26:08.561805   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:26:08.561996   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:26:08.562155   77648 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:26:08.654357   77648 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0816 00:26:08.718143   77648 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0816 00:26:08.776496   77648 main.go:141] libmachine: Stopping "default-k8s-diff-port-616827"...
	I0816 00:26:08.776538   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:26:08.778210   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Stop
	I0816 00:26:08.782170   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 0/120
	I0816 00:26:09.783476   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 1/120
	I0816 00:26:10.784712   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 2/120
	I0816 00:26:11.786112   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 3/120
	I0816 00:26:12.787428   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 4/120
	I0816 00:26:13.789593   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 5/120
	I0816 00:26:14.790747   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 6/120
	I0816 00:26:15.792184   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 7/120
	I0816 00:26:16.793434   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 8/120
	I0816 00:26:17.794695   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 9/120
	I0816 00:26:18.796645   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 10/120
	I0816 00:26:19.797881   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 11/120
	I0816 00:26:20.799064   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 12/120
	I0816 00:26:21.800294   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 13/120
	I0816 00:26:22.801464   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 14/120
	I0816 00:26:23.803302   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 15/120
	I0816 00:26:24.804776   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 16/120
	I0816 00:26:25.805876   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 17/120
	I0816 00:26:26.807329   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 18/120
	I0816 00:26:27.808684   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 19/120
	I0816 00:26:28.810893   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 20/120
	I0816 00:26:29.812149   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 21/120
	I0816 00:26:30.813363   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 22/120
	I0816 00:26:31.814723   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 23/120
	I0816 00:26:32.816074   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 24/120
	I0816 00:26:33.818096   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 25/120
	I0816 00:26:34.819461   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 26/120
	I0816 00:26:35.820604   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 27/120
	I0816 00:26:36.821987   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 28/120
	I0816 00:26:37.823407   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 29/120
	I0816 00:26:38.825458   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 30/120
	I0816 00:26:39.826854   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 31/120
	I0816 00:26:40.828086   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 32/120
	I0816 00:26:41.829516   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 33/120
	I0816 00:26:42.830819   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 34/120
	I0816 00:26:43.832970   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 35/120
	I0816 00:26:44.834297   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 36/120
	I0816 00:26:45.836235   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 37/120
	I0816 00:26:46.837742   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 38/120
	I0816 00:26:47.839141   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 39/120
	I0816 00:26:48.841432   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 40/120
	I0816 00:26:49.842666   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 41/120
	I0816 00:26:50.844317   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 42/120
	I0816 00:26:51.845643   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 43/120
	I0816 00:26:52.847297   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 44/120
	I0816 00:26:53.849495   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 45/120
	I0816 00:26:54.850906   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 46/120
	I0816 00:26:55.852335   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 47/120
	I0816 00:26:56.853702   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 48/120
	I0816 00:26:57.855144   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 49/120
	I0816 00:26:58.857600   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 50/120
	I0816 00:26:59.858953   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 51/120
	I0816 00:27:00.860255   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 52/120
	I0816 00:27:01.861719   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 53/120
	I0816 00:27:02.863037   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 54/120
	I0816 00:27:03.864945   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 55/120
	I0816 00:27:04.866270   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 56/120
	I0816 00:27:05.867535   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 57/120
	I0816 00:27:06.868854   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 58/120
	I0816 00:27:07.870221   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 59/120
	I0816 00:27:08.872298   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 60/120
	I0816 00:27:09.873591   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 61/120
	I0816 00:27:10.875204   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 62/120
	I0816 00:27:11.876554   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 63/120
	I0816 00:27:12.877889   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 64/120
	I0816 00:27:13.879753   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 65/120
	I0816 00:27:14.880950   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 66/120
	I0816 00:27:15.882295   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 67/120
	I0816 00:27:16.883808   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 68/120
	I0816 00:27:17.885151   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 69/120
	I0816 00:27:18.887418   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 70/120
	I0816 00:27:19.888816   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 71/120
	I0816 00:27:20.890256   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 72/120
	I0816 00:27:21.891580   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 73/120
	I0816 00:27:22.893029   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 74/120
	I0816 00:27:23.895136   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 75/120
	I0816 00:27:24.896472   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 76/120
	I0816 00:27:25.897939   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 77/120
	I0816 00:27:26.899342   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 78/120
	I0816 00:27:27.900672   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 79/120
	I0816 00:27:28.902397   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 80/120
	I0816 00:27:29.903625   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 81/120
	I0816 00:27:30.905137   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 82/120
	I0816 00:27:31.906571   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 83/120
	I0816 00:27:32.908413   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 84/120
	I0816 00:27:33.910443   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 85/120
	I0816 00:27:34.911763   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 86/120
	I0816 00:27:35.913002   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 87/120
	I0816 00:27:36.914236   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 88/120
	I0816 00:27:37.915537   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 89/120
	I0816 00:27:38.917996   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 90/120
	I0816 00:27:39.919372   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 91/120
	I0816 00:27:40.920848   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 92/120
	I0816 00:27:41.922253   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 93/120
	I0816 00:27:42.923568   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 94/120
	I0816 00:27:43.925672   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 95/120
	I0816 00:27:44.927084   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 96/120
	I0816 00:27:45.928611   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 97/120
	I0816 00:27:46.930297   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 98/120
	I0816 00:27:47.931599   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 99/120
	I0816 00:27:48.933667   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 100/120
	I0816 00:27:49.935084   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 101/120
	I0816 00:27:50.936419   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 102/120
	I0816 00:27:51.938328   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 103/120
	I0816 00:27:52.939851   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 104/120
	I0816 00:27:53.942004   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 105/120
	I0816 00:27:54.943525   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 106/120
	I0816 00:27:55.944920   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 107/120
	I0816 00:27:56.946521   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 108/120
	I0816 00:27:57.947891   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 109/120
	I0816 00:27:58.949249   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 110/120
	I0816 00:27:59.950907   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 111/120
	I0816 00:28:00.952298   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 112/120
	I0816 00:28:01.954855   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 113/120
	I0816 00:28:02.956211   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 114/120
	I0816 00:28:03.958465   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 115/120
	I0816 00:28:04.959925   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 116/120
	I0816 00:28:05.961327   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 117/120
	I0816 00:28:06.962630   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 118/120
	I0816 00:28:07.964070   77648 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for machine to stop 119/120
	I0816 00:28:08.964637   77648 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0816 00:28:08.964700   77648 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0816 00:28:08.966326   77648 out.go:201] 
	W0816 00:28:08.967514   77648 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0816 00:28:08.967535   77648 out.go:270] * 
	* 
	W0816 00:28:08.971149   77648 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:28:08.972242   77648 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-616827 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
E0816 00:28:09.072866   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:09.447669   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827: exit status 3 (18.52382513s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:28:27.498195   78300 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host
	E0816 00:28:27.498214   78300 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-616827" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-098619 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-098619 create -f testdata/busybox.yaml: exit status 1 (42.909583ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-098619" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-098619 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 6 (211.989305ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:27:56.685584   78131 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-098619" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 6 (215.938947ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:27:56.901423   78161 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-098619" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-098619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-098619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m50.097194643s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-098619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-098619 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-098619 describe deploy/metrics-server -n kube-system: exit status 1 (41.969993ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-098619" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-098619 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 6 (229.896576ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:29:47.270457   79059 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-098619" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-819398 -n no-preload-819398
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-819398 -n no-preload-819398: exit status 3 (3.167778645s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:28:17.610214   78346 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.15:22: connect: no route to host
	E0816 00:28:17.610233   78346 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.15:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-819398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-819398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153135473s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.15:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-819398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-819398 -n no-preload-819398
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-819398 -n no-preload-819398: exit status 3 (3.06352849s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:28:26.826179   78430 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.15:22: connect: no route to host
	E0816 00:28:26.826196   78430 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.15:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-819398" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-758469 -n embed-certs-758469
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-758469 -n embed-certs-758469: exit status 3 (3.167901389s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:28:29.898231   78459 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	E0816 00:28:29.898250   78459 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-758469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-758469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152692536s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-758469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-758469 -n embed-certs-758469
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-758469 -n embed-certs-758469: exit status 3 (3.063130365s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:28:39.114225   78637 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	E0816 00:28:39.114246   78637 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-758469" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827: exit status 3 (3.167935351s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:28:30.666202   78548 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host
	E0816 00:28:30.666227   78548 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-616827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0816 00:28:31.740143   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-616827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15332153s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-616827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
E0816 00:28:37.365010   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:37.371392   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:37.382800   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:37.404189   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:37.445648   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:37.527153   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:37.688732   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:38.010459   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:38.652820   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827: exit status 3 (3.062774243s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 00:28:39.882343   78667 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host
	E0816 00:28:39.882367   78667 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-616827" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (747.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-098619 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0816 00:29:53.662017   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:53.799542   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:59.303904   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:30:02.497179   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:30:04.943883   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:30:12.331483   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:30:25.212449   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:30:43.458689   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:30:52.915545   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:31:16.869463   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:31:21.226132   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:31:26.865471   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:31:31.509267   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:31:59.210531   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:32:05.380041   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:32:09.800522   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:32:28.472986   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:32:37.503973   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:32:51.159486   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:32:56.173261   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:33:37.365189   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:33:43.007245   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:34:05.067807   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:34:10.706790   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:34:21.519703   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:34:49.221892   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:34:53.800025   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:35:25.212510   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:36:31.510045   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:37:09.800579   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:37:28.473109   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:37:51.159993   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-098619 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m24.025234232s)

                                                
                                                
-- stdout --
	* [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-098619" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 00:29:51.785297   79191 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:29:51.785388   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785392   79191 out.go:358] Setting ErrFile to fd 2...
	I0816 00:29:51.785396   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785578   79191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:29:51.786145   79191 out.go:352] Setting JSON to false
	I0816 00:29:51.787066   79191 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7892,"bootTime":1723760300,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:29:51.787122   79191 start.go:139] virtualization: kvm guest
	I0816 00:29:51.789057   79191 out.go:177] * [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:29:51.790274   79191 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:29:51.790269   79191 notify.go:220] Checking for updates...
	I0816 00:29:51.792828   79191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:29:51.794216   79191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:29:51.795553   79191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:29:51.796761   79191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:29:51.798018   79191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:29:51.799561   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:29:51.799935   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.799990   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.814617   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0816 00:29:51.815056   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.815584   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.815606   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.815933   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.816131   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:51.817809   79191 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 00:29:51.819204   79191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:29:51.819604   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.819652   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.834270   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0816 00:29:51.834584   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.834992   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.835015   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.835303   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.835478   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:51.870472   79191 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:29:51.872031   79191 start.go:297] selected driver: kvm2
	I0816 00:29:51.872049   79191 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.872137   79191 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:29:51.872785   79191 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.872848   79191 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:29:51.887731   79191 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:29:51.888078   79191 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:29:51.888141   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:29:51.888154   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:29:51.888203   79191 start.go:340] cluster config:
	{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.888300   79191 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.890190   79191 out.go:177] * Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	I0816 00:29:51.891529   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:29:51.891557   79191 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:29:51.891565   79191 cache.go:56] Caching tarball of preloaded images
	I0816 00:29:51.891645   79191 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:29:51.891664   79191 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 00:29:51.891747   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:29:51.891915   79191 start.go:360] acquireMachinesLock for old-k8s-version-098619: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:33:44.254575   79191 start.go:364] duration metric: took 3m52.362627542s to acquireMachinesLock for "old-k8s-version-098619"
	I0816 00:33:44.254648   79191 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:44.254659   79191 fix.go:54] fixHost starting: 
	I0816 00:33:44.255099   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:44.255137   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:44.271236   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0816 00:33:44.271591   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:44.272030   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:33:44.272052   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:44.272328   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:44.272503   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:33:44.272660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetState
	I0816 00:33:44.274235   79191 fix.go:112] recreateIfNeeded on old-k8s-version-098619: state=Stopped err=<nil>
	I0816 00:33:44.274272   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	W0816 00:33:44.274415   79191 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:44.275978   79191 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-098619" ...
	I0816 00:33:44.277288   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .Start
	I0816 00:33:44.277426   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring networks are active...
	I0816 00:33:44.278141   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network default is active
	I0816 00:33:44.278471   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network mk-old-k8s-version-098619 is active
	I0816 00:33:44.278820   79191 main.go:141] libmachine: (old-k8s-version-098619) Getting domain xml...
	I0816 00:33:44.279523   79191 main.go:141] libmachine: (old-k8s-version-098619) Creating domain...
	I0816 00:33:45.643704   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting to get IP...
	I0816 00:33:45.644691   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.645213   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.645247   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.645162   80212 retry.go:31] will retry after 198.057532ms: waiting for machine to come up
	I0816 00:33:45.844756   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.845297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.845321   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.845247   80212 retry.go:31] will retry after 288.630433ms: waiting for machine to come up
	I0816 00:33:46.135913   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.136413   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.136442   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.136365   80212 retry.go:31] will retry after 456.48021ms: waiting for machine to come up
	I0816 00:33:46.594170   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.594649   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.594678   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.594592   80212 retry.go:31] will retry after 501.49137ms: waiting for machine to come up
	I0816 00:33:47.098130   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.098614   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.098645   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.098569   80212 retry.go:31] will retry after 663.568587ms: waiting for machine to come up
	I0816 00:33:47.763930   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.764447   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.764470   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.764376   80212 retry.go:31] will retry after 679.581678ms: waiting for machine to come up
	I0816 00:33:48.446082   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:48.446552   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:48.446579   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:48.446498   80212 retry.go:31] will retry after 1.090430732s: waiting for machine to come up
	I0816 00:33:49.538961   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:49.539454   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:49.539482   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:49.539397   80212 retry.go:31] will retry after 1.039148258s: waiting for machine to come up
	I0816 00:33:50.579642   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:50.580119   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:50.580144   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:50.580074   80212 retry.go:31] will retry after 1.440992413s: waiting for machine to come up
	I0816 00:33:52.022573   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:52.023319   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:52.023352   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:52.023226   80212 retry.go:31] will retry after 1.814668747s: waiting for machine to come up
	I0816 00:33:53.839539   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:53.839916   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:53.839944   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:53.839861   80212 retry.go:31] will retry after 1.900379439s: waiting for machine to come up
	I0816 00:33:55.742480   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:55.742981   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:55.743004   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:55.742920   80212 retry.go:31] will retry after 2.798728298s: waiting for machine to come up
	I0816 00:33:58.543282   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:58.543753   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:58.543783   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:58.543689   80212 retry.go:31] will retry after 4.402812235s: waiting for machine to come up
	I0816 00:34:02.951078   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951631   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has current primary IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951672   79191 main.go:141] libmachine: (old-k8s-version-098619) Found IP for machine: 192.168.72.137
	I0816 00:34:02.951687   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserving static IP address...
	I0816 00:34:02.952154   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserved static IP address: 192.168.72.137
	I0816 00:34:02.952186   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.952201   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting for SSH to be available...
	I0816 00:34:02.952224   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | skip adding static IP to network mk-old-k8s-version-098619 - found existing host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"}
	I0816 00:34:02.952236   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Getting to WaitForSSH function...
	I0816 00:34:02.954361   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954686   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.954715   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954791   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH client type: external
	I0816 00:34:02.954830   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa (-rw-------)
	I0816 00:34:02.954871   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:02.954890   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | About to run SSH command:
	I0816 00:34:02.954909   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | exit 0
	I0816 00:34:03.078035   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:03.078408   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:34:03.079002   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.081041   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081391   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.081489   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081566   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:34:03.081748   79191 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:03.081767   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.082007   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.084022   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084333   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.084357   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084499   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.084700   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.084867   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.085074   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.085266   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.085509   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.085525   79191 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:03.186066   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:03.186094   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186368   79191 buildroot.go:166] provisioning hostname "old-k8s-version-098619"
	I0816 00:34:03.186397   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186597   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.189330   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189658   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.189702   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189792   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.190004   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190185   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190344   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.190481   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.190665   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.190688   79191 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098619 && echo "old-k8s-version-098619" | sudo tee /etc/hostname
	I0816 00:34:03.304585   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098619
	
	I0816 00:34:03.304608   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.307415   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307732   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.307763   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307955   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.308155   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308314   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308474   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.308629   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.308795   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.308811   79191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098619/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:03.418968   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:03.419010   79191 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:03.419045   79191 buildroot.go:174] setting up certificates
	I0816 00:34:03.419058   79191 provision.go:84] configureAuth start
	I0816 00:34:03.419072   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.419338   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.421799   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422159   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.422198   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422401   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.425023   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425417   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.425445   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425557   79191 provision.go:143] copyHostCerts
	I0816 00:34:03.425624   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:03.425646   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:03.425717   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:03.425875   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:03.425888   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:03.425921   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:03.426007   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:03.426017   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:03.426045   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:03.426112   79191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098619 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-098619]
	I0816 00:34:03.509869   79191 provision.go:177] copyRemoteCerts
	I0816 00:34:03.509932   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:03.509961   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.512603   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.512938   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.512984   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.513163   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.513451   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.513617   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.513777   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:03.596330   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 00:34:03.621969   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:03.646778   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:03.671937   79191 provision.go:87] duration metric: took 252.867793ms to configureAuth
	I0816 00:34:03.671964   79191 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:03.672149   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:34:03.672250   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.675207   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675600   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.675625   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675787   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.676006   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676199   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676360   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.676549   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.676762   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.676779   79191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:03.945259   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:03.945287   79191 machine.go:96] duration metric: took 863.526642ms to provisionDockerMachine
	I0816 00:34:03.945298   79191 start.go:293] postStartSetup for "old-k8s-version-098619" (driver="kvm2")
	I0816 00:34:03.945308   79191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:03.945335   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.945638   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:03.945666   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.948590   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.948967   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.948989   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.949152   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.949350   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.949491   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.949645   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.028994   79191 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:04.033776   79191 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:04.033799   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:04.033872   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:04.033943   79191 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:04.034033   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:04.045492   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:04.071879   79191 start.go:296] duration metric: took 126.569157ms for postStartSetup
	I0816 00:34:04.071920   79191 fix.go:56] duration metric: took 19.817260263s for fixHost
	I0816 00:34:04.071944   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.074942   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.075325   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075504   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.075699   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075846   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075977   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.076146   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:04.076319   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:04.076332   79191 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:04.178483   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768444.133390375
	
	I0816 00:34:04.178510   79191 fix.go:216] guest clock: 1723768444.133390375
	I0816 00:34:04.178519   79191 fix.go:229] Guest: 2024-08-16 00:34:04.133390375 +0000 UTC Remote: 2024-08-16 00:34:04.071925107 +0000 UTC m=+252.320651106 (delta=61.465268ms)
	I0816 00:34:04.178537   79191 fix.go:200] guest clock delta is within tolerance: 61.465268ms
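
The fix.go lines above parse the guest's "date +%s.%N" output and accept the host when the guest/remote clock delta stays within a tolerance. A minimal sketch of that comparison follows; the function name and tolerance handling are assumptions for illustration.

// Hypothetical sketch of the guest-clock tolerance check logged above.
package clockcheck

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to ours.
func withinTolerance(dateOutput string, local time.Time, tolerance time.Duration) (bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return false, fmt.Errorf("parsing guest clock %q: %v", dateOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance, nil
}
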
	I0816 00:34:04.178541   79191 start.go:83] releasing machines lock for "old-k8s-version-098619", held for 19.923923778s
	I0816 00:34:04.178567   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.178875   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:04.181999   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182458   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.182490   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183192   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183357   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183412   79191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:04.183461   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.183553   79191 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:04.183575   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.186192   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186418   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186507   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186531   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186679   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.186811   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186836   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186850   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187016   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187032   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.187211   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187215   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.187364   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187488   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.283880   79191 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:04.289798   79191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:04.436822   79191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:04.443547   79191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:04.443631   79191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:04.464783   79191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:04.464807   79191 start.go:495] detecting cgroup driver to use...
	I0816 00:34:04.464873   79191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:04.481504   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:04.501871   79191 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:04.501942   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:04.521898   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:04.538186   79191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:04.704361   79191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:04.881682   79191 docker.go:233] disabling docker service ...
	I0816 00:34:04.881757   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:04.900264   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:04.916152   79191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:05.048440   79191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:05.166183   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:05.181888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:05.202525   79191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 00:34:05.202592   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.214655   79191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:05.214712   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.226052   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.236878   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
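
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, the cgroup manager, and conmon_cgroup. The sketch below merely parameterizes those logged commands; the helper name is hypothetical.

// Hypothetical helper that builds the sed commands shown above for the
// cri-o drop-in; paths and quoting mirror the logged invocations.
package criocfg

import "fmt"

const dropIn = "/etc/crio/crio.conf.d/02-crio.conf"

func crioSedCommands(pauseImage, cgroupManager string) []string {
	return []string{
		fmt.Sprintf(`sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s"`, pauseImage, dropIn),
		fmt.Sprintf(`sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s"`, cgroupManager, dropIn),
		// drop any stale conmon_cgroup line, then re-add it right after cgroup_manager
		fmt.Sprintf(`sh -c "sudo sed -i '/conmon_cgroup = .*/d' %s"`, dropIn),
		fmt.Sprintf(`sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s"`, dropIn),
	}
}
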
	I0816 00:34:05.249217   79191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:05.260362   79191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:05.271039   79191 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:05.271108   79191 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:05.290423   79191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
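
The sysctl probe above fails because the bridge netfilter module is not loaded yet, so the flow falls back to modprobe br_netfilter and then enables IPv4 forwarding. A minimal sketch of that fallback, assuming a generic command runner:

// Hypothetical sketch of the netfilter fallback logged above: if the sysctl
// probe fails, load br_netfilter and then enable IPv4 forwarding.
package netprep

type runner func(cmd string) error

func ensureBridgeNetfilter(run runner) error {
	// The probe is allowed to fail; its only purpose is to detect whether
	// /proc/sys/net/bridge exists yet.
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("sudo modprobe br_netfilter"); err != nil {
			return err
		}
	}
	return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}
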
	I0816 00:34:05.307175   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:05.465815   79191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:05.640787   79191 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:05.640878   79191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:05.646821   79191 start.go:563] Will wait 60s for crictl version
	I0816 00:34:05.646883   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:05.651455   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:05.698946   79191 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:05.699037   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.729185   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.772063   79191 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 00:34:05.773406   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:05.776689   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777177   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:05.777241   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777435   79191 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:05.782377   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:05.797691   79191 kubeadm.go:883] updating cluster {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:05.797872   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:34:05.797953   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:05.861468   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:05.861557   79191 ssh_runner.go:195] Run: which lz4
	I0816 00:34:05.866880   79191 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:34:05.872036   79191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:34:05.872071   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 00:34:07.631328   79191 crio.go:462] duration metric: took 1.76448771s to copy over tarball
	I0816 00:34:07.631413   79191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:34:10.662435   79191 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.030990355s)
	I0816 00:34:10.662472   79191 crio.go:469] duration metric: took 3.031115615s to extract the tarball
	I0816 00:34:10.662482   79191 ssh_runner.go:146] rm: /preloaded.tar.lz4
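
The steps above check for /preloaded.tar.lz4 on the guest, copy the ~473 MB preload tarball when it is missing, extract it into /var with lz4, and delete it. A sketch of that copy-if-missing-then-extract flow; the runner and copier signatures are assumptions.

// Hypothetical sketch of the preload flow above.
package preload

type runner func(cmd string) error
type copier func(localPath, remotePath string) error

func ensurePreloadedImages(run runner, scp copier, localTarball string) error {
	const remote = "/preloaded.tar.lz4"
	if err := run(`stat -c "%s %y" ` + remote); err != nil {
		// Not present yet: ship the tarball over SSH.
		if err := scp(localTarball, remote); err != nil {
			return err
		}
	}
	if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
		return err
	}
	return run("sudo rm -f " + remote)
}
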
	I0816 00:34:10.707627   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:10.745704   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:10.745742   79191 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.745838   79191 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.745914   79191 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.745860   79191 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.745943   79191 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.745884   79191 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.746059   79191 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747781   79191 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.747803   79191 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.747808   79191 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.747824   79191 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.747842   79191 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.747883   79191 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.747895   79191 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747948   79191 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.916488   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.923947   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.931668   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.942764   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.948555   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.957593   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.970039   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 00:34:11.012673   79191 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 00:34:11.012707   79191 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.012778   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.026267   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:11.135366   79191 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 00:34:11.135398   79191 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.135451   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.149180   79191 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 00:34:11.149226   79191 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.149271   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183480   79191 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 00:34:11.183526   79191 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.183526   79191 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 00:34:11.183578   79191 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.183584   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183637   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186513   79191 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 00:34:11.186559   79191 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.186622   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186632   79191 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 00:34:11.186658   79191 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 00:34:11.186699   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186722   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.252857   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.252914   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.252935   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.253007   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.253012   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.253083   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.253140   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420527   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.420559   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.420564   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.420638   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420732   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.420791   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.420813   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591141   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.591197   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.591267   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.591337   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.591418   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591453   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.591505   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 00:34:11.721234   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 00:34:11.725967   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 00:34:11.731189   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 00:34:11.731276   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 00:34:11.742195   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 00:34:11.742224   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 00:34:11.742265   79191 cache_images.go:92] duration metric: took 996.507737ms to LoadCachedImages
	W0816 00:34:11.742327   79191 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
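
The passage above inspects each required image in the runtime, marks it "needs transfer" when its ID does not match the expected digest, removes it with crictl, and then tries to load the cached copy; here the cache files are absent, so the warning is logged and startup continues. A rough, assumption-laden sketch of that per-image decision:

// Hypothetical sketch of the per-image cache decision logged above; the
// runner type and expected-ID lookup are assumptions for illustration.
package imagecache

import (
	"fmt"
	"os"
	"strings"
)

type runner func(cmd string) (string, error)

func syncImage(run runner, image, wantID, cacheFile string) error {
	out, _ := run("sudo podman image inspect --format {{.Id}} " + image)
	if strings.TrimSpace(out) == wantID {
		return nil // already present with the right content
	}
	// Stale or absent: drop whatever is there, then load from the cache dir.
	if _, err := run("sudo /usr/bin/crictl rmi " + image); err != nil {
		return err
	}
	if _, err := os.Stat(cacheFile); err != nil {
		return fmt.Errorf("load cached image %s: %w", image, err)
	}
	// Transfer and import of cacheFile would happen here.
	return nil
}
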
	I0816 00:34:11.742342   79191 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0816 00:34:11.742464   79191 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-098619 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
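
The kubelet drop-in above is rendered in memory and then copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small sketch of rendering that unit text; the function name and parameter set are illustrative.

// Hypothetical sketch of rendering the kubelet drop-in shown above.
package kubeletcfg

import "fmt"

func kubeletDropIn(version, nodeName, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=%[3]s

[Install]
`, version, nodeName, nodeIP)
}
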
	I0816 00:34:11.742546   79191 ssh_runner.go:195] Run: crio config
	I0816 00:34:11.791749   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:34:11.791779   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:11.791791   79191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:11.791810   79191 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098619 NodeName:old-k8s-version-098619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 00:34:11.791969   79191 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-098619"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:11.792046   79191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 00:34:11.802572   79191 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:11.802649   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:11.812583   79191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 00:34:11.831551   79191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:11.852476   79191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
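
The rendered kubeadm config is staged as kubeadm.yaml.new here and compared against the existing copy later (the sudo diff -u step further down) to decide whether the node needs reconfiguration. A hedged sketch of that stage-and-compare step; helper names are illustrative.

// Hypothetical sketch of staging the kubeadm config and deciding whether
// reconfiguration is needed.
package kubeadmcfg

type runner func(cmd string) error
type writer func(remotePath string, data []byte) error

func stageKubeadmConfig(write writer, run runner, rendered []byte) (needsCopy bool, err error) {
	const current, staged = "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"
	if err := write(staged, rendered); err != nil {
		return false, err
	}
	// diff exits non-zero when the files differ (or the old one is missing),
	// which is exactly the "needs reconfiguration" signal.
	if err := run("sudo diff -u " + current + " " + staged); err != nil {
		return true, nil
	}
	return false, nil
}
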
	I0816 00:34:11.875116   79191 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:11.879833   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
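
The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current one, writing through a temp file plus sudo cp because /etc/hosts needs root. A native-Go sketch of the same rewrite; the helper name is an assumption.

// Hypothetical sketch of the /etc/hosts rewrite logged above.
package hostsfile

import (
	"fmt"
	"strings"
)

func withControlPlaneEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	out := strings.Join(kept, "\n")
	if out != "" && !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	return out + fmt.Sprintf("%s\t%s\n", ip, name)
}
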
	I0816 00:34:11.893308   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:12.038989   79191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:12.061736   79191 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619 for IP: 192.168.72.137
	I0816 00:34:12.061761   79191 certs.go:194] generating shared ca certs ...
	I0816 00:34:12.061780   79191 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.061992   79191 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:12.062046   79191 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:12.062059   79191 certs.go:256] generating profile certs ...
	I0816 00:34:12.062193   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key
	I0816 00:34:12.062283   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4
	I0816 00:34:12.062343   79191 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key
	I0816 00:34:12.062485   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:12.062523   79191 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:12.062536   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:12.062579   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:12.062614   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:12.062658   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:12.062721   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:12.063630   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:12.106539   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:12.139393   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:12.171548   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:12.213113   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 00:34:12.244334   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:34:12.287340   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:12.331047   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:34:12.369666   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:12.397260   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:12.424009   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:12.450212   79191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:12.471550   79191 ssh_runner.go:195] Run: openssl version
	I0816 00:34:12.479821   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:12.494855   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500546   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500620   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.508817   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:12.521689   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:12.533904   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538789   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538946   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.546762   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:12.561940   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:12.575852   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582377   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582457   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.590772   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:12.604976   79191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:12.610332   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:12.617070   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:12.625769   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:12.634342   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:12.641486   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:12.650090   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
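
The openssl x509 -checkend 86400 probes above verify that each control-plane certificate remains valid for at least another day. An equivalent check with Go's crypto/x509, as a sketch:

// Hypothetical native equivalent of `openssl x509 -checkend 86400`: parse a
// PEM certificate and confirm it stays valid for at least another 24 hours.
package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"time"
)

func validForAnotherDay(pemBytes []byte, now time.Time) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return now.Add(24 * time.Hour).Before(cert.NotAfter), nil
}
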
	I0816 00:34:12.658206   79191 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:12.658306   79191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:12.658392   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.703323   79191 cri.go:89] found id: ""
	I0816 00:34:12.703399   79191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:12.714950   79191 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:12.714970   79191 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:12.715047   79191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:12.727051   79191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:12.728059   79191 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:12.728655   79191 kubeconfig.go:62] /home/jenkins/minikube-integration/19452-12919/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-098619" cluster setting kubeconfig missing "old-k8s-version-098619" context setting]
	I0816 00:34:12.729552   79191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.731269   79191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:12.744732   79191 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0816 00:34:12.744766   79191 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:12.744777   79191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:12.744833   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.783356   79191 cri.go:89] found id: ""
	I0816 00:34:12.783432   79191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:12.801942   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:12.816412   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:12.816433   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:12.816480   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:12.827686   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:12.827757   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:12.838063   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:12.847714   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:12.847808   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:12.858274   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.869328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:12.869389   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.881457   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:12.892256   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:12.892325   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
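
Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint above; files that do not contain it (here, files that do not exist at all) are removed so kubeadm can regenerate them. A compact sketch of that sweep, assuming a generic runner:

// Hypothetical sketch of the stale-config sweep logged above.
package staleconf

type runner func(cmd string) error

func removeStaleKubeconfigs(run runner, endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero if the endpoint (or the file) is missing.
		if err := run("sudo grep " + endpoint + " " + f); err != nil {
			if err := run("sudo rm -f " + f); err != nil {
				return err
			}
		}
	}
	return nil
}
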
	I0816 00:34:12.902115   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:12.912484   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.040145   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.851639   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.085396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.208430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.321003   79191 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:14.321084   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:14.822130   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.321780   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.822121   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:16.322077   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:16.821714   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.321166   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.821648   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.321711   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.821520   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.321732   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.821325   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.321783   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.821958   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:21.321139   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:21.822114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.321350   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.821541   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.322014   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.821938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.321883   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.821178   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.321881   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.821199   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:26.321573   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:26.821489   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.322094   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.321201   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.821854   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.321188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.821729   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.321316   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.821998   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:31.322184   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:31.821361   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.321205   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.822088   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.322126   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.821956   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.321921   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.821245   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.822034   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:36.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:36.821567   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.321329   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.822169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.321832   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.821404   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.321406   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.821914   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.322169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.821149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:41.322125   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:41.821459   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.321938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.822038   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.321447   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.821571   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.321428   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.821496   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:46.322149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:46.822140   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.321575   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.321365   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.822009   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.321536   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.821189   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.321387   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.821982   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:51.322075   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:51.822066   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.321534   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.821154   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.321256   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.821510   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.321984   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.821175   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.321601   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:56.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:56.821891   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.321266   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.821346   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.321718   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.821304   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.821302   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.821563   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:01.321323   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:01.821317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.321560   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.821707   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.322110   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.821327   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.321430   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.821935   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.321559   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.821373   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.821405   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.321781   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.821420   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.321483   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.821347   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.321167   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.821188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.821179   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:11.322114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:11.822105   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.321963   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.822172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.321805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.821971   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:14.321784   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:14.321882   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:14.360939   79191 cri.go:89] found id: ""
	I0816 00:35:14.360962   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.360971   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:14.360976   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:14.361028   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:14.397796   79191 cri.go:89] found id: ""
	I0816 00:35:14.397824   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.397836   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:14.397858   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:14.397922   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:14.433924   79191 cri.go:89] found id: ""
	I0816 00:35:14.433950   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.433960   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:14.433968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:14.434024   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:14.468657   79191 cri.go:89] found id: ""
	I0816 00:35:14.468685   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.468696   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:14.468704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:14.468770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:14.505221   79191 cri.go:89] found id: ""
	I0816 00:35:14.505247   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.505256   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:14.505264   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:14.505323   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:14.546032   79191 cri.go:89] found id: ""
	I0816 00:35:14.546062   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.546072   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:14.546079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:14.546147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:14.581260   79191 cri.go:89] found id: ""
	I0816 00:35:14.581284   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.581292   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:14.581298   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:14.581352   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:14.616103   79191 cri.go:89] found id: ""
	I0816 00:35:14.616127   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.616134   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:14.616142   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:14.616153   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:14.690062   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:14.690106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:14.735662   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:14.735699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:14.786049   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:14.786086   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:14.800375   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:14.800405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:14.931822   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
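
(Editorial note, not part of the captured log: the lines above show minikube polling `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH at roughly 500 ms intervals, then giving up and starting a diagnostics pass. Below is a minimal, self-contained sketch of that poll-until-deadline pattern, assuming a plain local exec instead of minikube's ssh_runner; it is an illustration, not minikube's actual implementation.)

// Sketch of the wait loop implied by the log: run pgrep every ~500ms until the
// process appears or a deadline passes. Assumes a Linux host with pgrep.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess returns nil as soon as `pgrep -xnf pattern` exits 0
// (at least one matching process), or an error once the timeout elapses.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("process matching %q did not appear within %s", pattern, timeout)
}

func main() {
	// Same pattern the log polls for; interval and timeout are placeholders.
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
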
	I0816 00:35:17.432686   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:17.448728   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:17.448806   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:17.496384   79191 cri.go:89] found id: ""
	I0816 00:35:17.496523   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.496568   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:17.496581   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:17.496646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:17.560779   79191 cri.go:89] found id: ""
	I0816 00:35:17.560810   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.560820   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:17.560829   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:17.560891   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:17.606007   79191 cri.go:89] found id: ""
	I0816 00:35:17.606036   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.606047   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:17.606054   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:17.606123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:17.639910   79191 cri.go:89] found id: ""
	I0816 00:35:17.639937   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.639945   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:17.639951   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:17.640030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:17.676534   79191 cri.go:89] found id: ""
	I0816 00:35:17.676563   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.676573   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:17.676581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:17.676645   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:17.716233   79191 cri.go:89] found id: ""
	I0816 00:35:17.716255   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.716262   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:17.716268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:17.716334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:17.753648   79191 cri.go:89] found id: ""
	I0816 00:35:17.753686   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.753696   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:17.753704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:17.753763   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:17.791670   79191 cri.go:89] found id: ""
	I0816 00:35:17.791694   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.791702   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:17.791711   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:17.791722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:17.840616   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:17.840650   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:17.854949   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:17.854981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:17.933699   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:17.933724   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:17.933750   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:18.010177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:18.010211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:20.551384   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:20.564463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:20.564540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:20.604361   79191 cri.go:89] found id: ""
	I0816 00:35:20.604389   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.604399   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:20.604405   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:20.604453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:20.639502   79191 cri.go:89] found id: ""
	I0816 00:35:20.639528   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.639535   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:20.639541   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:20.639590   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:20.676430   79191 cri.go:89] found id: ""
	I0816 00:35:20.676476   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.676484   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:20.676496   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:20.676551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:20.711213   79191 cri.go:89] found id: ""
	I0816 00:35:20.711243   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.711253   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:20.711261   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:20.711320   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:20.745533   79191 cri.go:89] found id: ""
	I0816 00:35:20.745563   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.745574   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:20.745581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:20.745644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:20.781031   79191 cri.go:89] found id: ""
	I0816 00:35:20.781056   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.781064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:20.781071   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:20.781119   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:20.819966   79191 cri.go:89] found id: ""
	I0816 00:35:20.819994   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.820005   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:20.820012   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:20.820096   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:20.859011   79191 cri.go:89] found id: ""
	I0816 00:35:20.859041   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.859052   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:20.859063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:20.859078   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:20.909479   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:20.909513   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:20.925627   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:20.925653   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:21.001707   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:21.001733   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:21.001747   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:21.085853   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:21.085893   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:23.626499   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:23.640337   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:23.640395   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:23.679422   79191 cri.go:89] found id: ""
	I0816 00:35:23.679449   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.679457   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:23.679463   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:23.679522   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:23.716571   79191 cri.go:89] found id: ""
	I0816 00:35:23.716594   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.716601   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:23.716607   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:23.716660   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:23.752539   79191 cri.go:89] found id: ""
	I0816 00:35:23.752563   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.752573   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:23.752581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:23.752640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:23.790665   79191 cri.go:89] found id: ""
	I0816 00:35:23.790693   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.790700   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:23.790707   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:23.790757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:23.827695   79191 cri.go:89] found id: ""
	I0816 00:35:23.827719   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.827727   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:23.827733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:23.827792   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:23.867664   79191 cri.go:89] found id: ""
	I0816 00:35:23.867687   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.867695   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:23.867701   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:23.867776   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:23.907844   79191 cri.go:89] found id: ""
	I0816 00:35:23.907871   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.907882   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:23.907890   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:23.907951   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:23.945372   79191 cri.go:89] found id: ""
	I0816 00:35:23.945403   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.945414   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:23.945424   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:23.945438   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:23.998270   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:23.998302   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:24.012794   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:24.012824   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:24.087285   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:24.087308   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:24.087340   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:24.167151   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:24.167184   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:26.710285   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:26.724394   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:26.724453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:26.764667   79191 cri.go:89] found id: ""
	I0816 00:35:26.764690   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.764698   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:26.764704   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:26.764756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:26.806631   79191 cri.go:89] found id: ""
	I0816 00:35:26.806660   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.806670   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:26.806677   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:26.806741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:26.843434   79191 cri.go:89] found id: ""
	I0816 00:35:26.843473   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.843485   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:26.843493   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:26.843576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:26.882521   79191 cri.go:89] found id: ""
	I0816 00:35:26.882556   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.882566   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:26.882574   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:26.882635   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:26.917956   79191 cri.go:89] found id: ""
	I0816 00:35:26.917985   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.917995   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:26.918004   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:26.918056   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:26.953168   79191 cri.go:89] found id: ""
	I0816 00:35:26.953191   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.953199   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:26.953205   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:26.953251   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:26.991366   79191 cri.go:89] found id: ""
	I0816 00:35:26.991397   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.991408   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:26.991416   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:26.991479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:27.028591   79191 cri.go:89] found id: ""
	I0816 00:35:27.028619   79191 logs.go:276] 0 containers: []
	W0816 00:35:27.028626   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:27.028635   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:27.028647   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:27.111613   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:27.111645   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:27.153539   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:27.153575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:27.209377   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:27.209420   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:27.223316   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:27.223343   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:27.301411   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
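
(Editorial note, not part of the captured log: each diagnostics pass above runs `sudo crictl ps -a --quiet --name=<component>` once per control-plane component and treats empty output as "No container was found matching …". A self-contained sketch of that check, run directly on the node rather than through minikube's ssh_runner, is shown below; component list and messages mirror the log, everything else is illustrative.)

// Sketch of the per-component CRI container check seen in the log.
// Assumes crictl is installed and the user can sudo without a prompt.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs (one per line); -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
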
	I0816 00:35:29.801803   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:29.815545   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:29.815626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:29.853638   79191 cri.go:89] found id: ""
	I0816 00:35:29.853668   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.853678   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:29.853687   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:29.853756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:29.892532   79191 cri.go:89] found id: ""
	I0816 00:35:29.892554   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.892561   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:29.892567   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:29.892622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:29.932486   79191 cri.go:89] found id: ""
	I0816 00:35:29.932511   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.932519   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:29.932524   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:29.932580   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:29.973161   79191 cri.go:89] found id: ""
	I0816 00:35:29.973194   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.973205   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:29.973213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:29.973275   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:30.009606   79191 cri.go:89] found id: ""
	I0816 00:35:30.009629   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.009637   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:30.009643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:30.009691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:30.045016   79191 cri.go:89] found id: ""
	I0816 00:35:30.045043   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.045050   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:30.045057   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:30.045113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:30.079934   79191 cri.go:89] found id: ""
	I0816 00:35:30.079959   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.079968   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:30.079974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:30.080030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:30.114173   79191 cri.go:89] found id: ""
	I0816 00:35:30.114199   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.114207   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:30.114216   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:30.114227   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:30.154765   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:30.154791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:30.204410   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:30.204442   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:30.218909   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:30.218934   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:30.294141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:30.294161   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:30.294193   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:32.872216   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:32.886211   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:32.886289   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:32.929416   79191 cri.go:89] found id: ""
	I0816 00:35:32.929440   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.929449   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:32.929456   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:32.929520   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:32.977862   79191 cri.go:89] found id: ""
	I0816 00:35:32.977887   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.977896   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:32.977920   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:32.977978   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:33.015569   79191 cri.go:89] found id: ""
	I0816 00:35:33.015593   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.015603   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:33.015622   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:33.015681   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:33.050900   79191 cri.go:89] found id: ""
	I0816 00:35:33.050934   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.050943   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:33.050959   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:33.051033   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:33.084529   79191 cri.go:89] found id: ""
	I0816 00:35:33.084556   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.084564   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:33.084569   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:33.084619   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:33.119819   79191 cri.go:89] found id: ""
	I0816 00:35:33.119845   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.119855   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:33.119863   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:33.119928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:33.159922   79191 cri.go:89] found id: ""
	I0816 00:35:33.159952   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.159959   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:33.159965   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:33.160023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:33.194977   79191 cri.go:89] found id: ""
	I0816 00:35:33.195006   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.195018   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:33.195030   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:33.195044   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:33.208578   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:33.208623   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:33.282177   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:33.282198   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:33.282211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:33.365514   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:33.365552   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:33.405190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:33.405226   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:35.959033   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:35.971866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:35.971934   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:36.008442   79191 cri.go:89] found id: ""
	I0816 00:35:36.008473   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.008483   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:36.008489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:36.008547   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:36.044346   79191 cri.go:89] found id: ""
	I0816 00:35:36.044374   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.044386   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:36.044393   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:36.044444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:36.083078   79191 cri.go:89] found id: ""
	I0816 00:35:36.083104   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.083112   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:36.083118   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:36.083166   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:36.120195   79191 cri.go:89] found id: ""
	I0816 00:35:36.120218   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.120226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:36.120232   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:36.120288   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:36.156186   79191 cri.go:89] found id: ""
	I0816 00:35:36.156215   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.156225   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:36.156233   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:36.156295   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:36.195585   79191 cri.go:89] found id: ""
	I0816 00:35:36.195613   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.195623   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:36.195631   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:36.195699   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:36.231110   79191 cri.go:89] found id: ""
	I0816 00:35:36.231133   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.231141   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:36.231147   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:36.231210   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:36.268745   79191 cri.go:89] found id: ""
	I0816 00:35:36.268770   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.268778   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:36.268786   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:36.268800   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:36.282225   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:36.282251   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:36.351401   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:36.351431   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:36.351447   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:36.429970   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:36.430003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:36.473745   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:36.473776   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:39.027444   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:39.041107   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:39.041170   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:39.079807   79191 cri.go:89] found id: ""
	I0816 00:35:39.079830   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.079837   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:39.079843   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:39.079890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:39.115532   79191 cri.go:89] found id: ""
	I0816 00:35:39.115559   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.115569   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:39.115576   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:39.115623   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:39.150197   79191 cri.go:89] found id: ""
	I0816 00:35:39.150222   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.150233   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:39.150241   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:39.150300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:39.186480   79191 cri.go:89] found id: ""
	I0816 00:35:39.186507   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.186515   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:39.186521   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:39.186572   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:39.221576   79191 cri.go:89] found id: ""
	I0816 00:35:39.221605   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.221615   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:39.221620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:39.221669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:39.259846   79191 cri.go:89] found id: ""
	I0816 00:35:39.259877   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.259888   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:39.259896   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:39.259950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:39.294866   79191 cri.go:89] found id: ""
	I0816 00:35:39.294891   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.294898   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:39.294903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:39.294952   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:39.329546   79191 cri.go:89] found id: ""
	I0816 00:35:39.329576   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.329584   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:39.329593   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:39.329604   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:39.371579   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:39.371609   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:39.422903   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:39.422935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:39.437673   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:39.437699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:39.515146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:39.515171   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:39.515185   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:42.101733   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:42.115563   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:42.115640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:42.155187   79191 cri.go:89] found id: ""
	I0816 00:35:42.155216   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.155224   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:42.155230   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:42.155282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:42.194414   79191 cri.go:89] found id: ""
	I0816 00:35:42.194444   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.194456   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:42.194464   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:42.194523   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:42.234219   79191 cri.go:89] found id: ""
	I0816 00:35:42.234245   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.234253   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:42.234259   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:42.234314   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:42.272278   79191 cri.go:89] found id: ""
	I0816 00:35:42.272304   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.272314   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:42.272322   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:42.272381   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:42.309973   79191 cri.go:89] found id: ""
	I0816 00:35:42.309999   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.310007   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:42.310013   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:42.310066   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:42.350745   79191 cri.go:89] found id: ""
	I0816 00:35:42.350773   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.350782   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:42.350790   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:42.350853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:42.387775   79191 cri.go:89] found id: ""
	I0816 00:35:42.387803   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.387813   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:42.387832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:42.387902   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:42.425086   79191 cri.go:89] found id: ""
	I0816 00:35:42.425110   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.425118   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:42.425125   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:42.425138   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:42.515543   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:42.515575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:42.558348   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:42.558372   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:42.613026   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:42.613059   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.628907   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:42.628932   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:42.710265   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.211083   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:45.225001   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:45.225083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:45.258193   79191 cri.go:89] found id: ""
	I0816 00:35:45.258223   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.258232   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:45.258240   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:45.258297   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:45.294255   79191 cri.go:89] found id: ""
	I0816 00:35:45.294278   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.294286   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:45.294291   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:45.294335   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:45.329827   79191 cri.go:89] found id: ""
	I0816 00:35:45.329875   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.329886   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:45.329894   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:45.329944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:45.366095   79191 cri.go:89] found id: ""
	I0816 00:35:45.366124   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.366134   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:45.366141   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:45.366202   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:45.402367   79191 cri.go:89] found id: ""
	I0816 00:35:45.402390   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.402398   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:45.402403   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:45.402449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:45.439272   79191 cri.go:89] found id: ""
	I0816 00:35:45.439293   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.439300   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:45.439310   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:45.439358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:45.474351   79191 cri.go:89] found id: ""
	I0816 00:35:45.474380   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.474388   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:45.474393   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:45.474445   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:45.519636   79191 cri.go:89] found id: ""
	I0816 00:35:45.519661   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.519671   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:45.519680   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:45.519695   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:45.593425   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.593446   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:45.593458   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:45.668058   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:45.668095   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:45.716090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:45.716125   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:45.774177   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:45.774207   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:48.288893   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:48.302256   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:48.302321   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:48.337001   79191 cri.go:89] found id: ""
	I0816 00:35:48.337030   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.337041   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:48.337048   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:48.337110   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:48.378341   79191 cri.go:89] found id: ""
	I0816 00:35:48.378367   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.378375   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:48.378384   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:48.378447   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:48.414304   79191 cri.go:89] found id: ""
	I0816 00:35:48.414383   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.414402   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:48.414410   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:48.414473   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:48.453946   79191 cri.go:89] found id: ""
	I0816 00:35:48.453969   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.453976   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:48.453982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:48.454036   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:48.489597   79191 cri.go:89] found id: ""
	I0816 00:35:48.489617   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.489623   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:48.489629   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:48.489672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:48.524195   79191 cri.go:89] found id: ""
	I0816 00:35:48.524222   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.524232   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:48.524239   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:48.524293   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:48.567854   79191 cri.go:89] found id: ""
	I0816 00:35:48.567880   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.567890   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:48.567897   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:48.567956   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:48.603494   79191 cri.go:89] found id: ""
	I0816 00:35:48.603520   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.603530   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:48.603540   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:48.603556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:48.642927   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:48.642960   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:48.693761   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:48.693791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:48.708790   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:48.708818   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:48.780072   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:48.780092   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:48.780106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.362108   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:51.376113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:51.376185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:51.413988   79191 cri.go:89] found id: ""
	I0816 00:35:51.414022   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.414033   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:51.414041   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:51.414101   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:51.460901   79191 cri.go:89] found id: ""
	I0816 00:35:51.460937   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.460948   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:51.460956   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:51.461019   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:51.497178   79191 cri.go:89] found id: ""
	I0816 00:35:51.497205   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.497215   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:51.497223   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:51.497365   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:51.534559   79191 cri.go:89] found id: ""
	I0816 00:35:51.534589   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.534600   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:51.534607   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:51.534668   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:51.570258   79191 cri.go:89] found id: ""
	I0816 00:35:51.570280   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.570287   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:51.570293   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:51.570356   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:51.609639   79191 cri.go:89] found id: ""
	I0816 00:35:51.609665   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.609675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:51.609683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:51.609742   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:51.645629   79191 cri.go:89] found id: ""
	I0816 00:35:51.645652   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.645659   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:51.645664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:51.645731   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:51.683325   79191 cri.go:89] found id: ""
	I0816 00:35:51.683344   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.683351   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:51.683358   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:51.683369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:51.739101   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:51.739133   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:51.753436   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:51.753466   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:51.831242   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:51.831268   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:51.831294   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.926924   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:51.926970   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:54.472667   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:54.486706   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:54.486785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:54.524180   79191 cri.go:89] found id: ""
	I0816 00:35:54.524203   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.524211   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:54.524216   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:54.524273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:54.563758   79191 cri.go:89] found id: ""
	I0816 00:35:54.563781   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.563788   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:54.563795   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:54.563859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:54.599442   79191 cri.go:89] found id: ""
	I0816 00:35:54.599471   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.599481   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:54.599488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:54.599553   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:54.633521   79191 cri.go:89] found id: ""
	I0816 00:35:54.633547   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.633558   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:54.633565   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:54.633628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:54.670036   79191 cri.go:89] found id: ""
	I0816 00:35:54.670064   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.670075   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:54.670083   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:54.670148   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:54.707565   79191 cri.go:89] found id: ""
	I0816 00:35:54.707587   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.707594   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:54.707600   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:54.707659   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:54.744500   79191 cri.go:89] found id: ""
	I0816 00:35:54.744530   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.744541   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:54.744548   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:54.744612   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:54.778964   79191 cri.go:89] found id: ""
	I0816 00:35:54.778988   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.778995   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:54.779007   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:54.779020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:54.831806   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:54.831838   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:54.845954   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:54.845979   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:54.921817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:54.921855   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:54.921871   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:55.006401   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:55.006439   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:57.548661   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:57.562489   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:57.562549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:57.597855   79191 cri.go:89] found id: ""
	I0816 00:35:57.597881   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.597891   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:57.597899   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:57.597961   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:57.634085   79191 cri.go:89] found id: ""
	I0816 00:35:57.634114   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.634126   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:57.634133   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:57.634193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:57.671748   79191 cri.go:89] found id: ""
	I0816 00:35:57.671779   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.671788   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:57.671795   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:57.671859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:57.708836   79191 cri.go:89] found id: ""
	I0816 00:35:57.708862   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.708870   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:57.708877   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:57.708940   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:57.744601   79191 cri.go:89] found id: ""
	I0816 00:35:57.744630   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.744639   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:57.744645   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:57.744706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:57.781888   79191 cri.go:89] found id: ""
	I0816 00:35:57.781919   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.781929   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:57.781937   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:57.781997   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:57.822612   79191 cri.go:89] found id: ""
	I0816 00:35:57.822634   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.822641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:57.822647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:57.822706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:57.873968   79191 cri.go:89] found id: ""
	I0816 00:35:57.873998   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.874008   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:57.874019   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:57.874037   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:57.896611   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:57.896643   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:57.995575   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:57.995597   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:57.995612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:58.077196   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:58.077230   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:58.116956   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:58.116985   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:00.664805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:00.678425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:00.678501   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:00.715522   79191 cri.go:89] found id: ""
	I0816 00:36:00.715548   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.715557   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:00.715562   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:00.715608   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:00.749892   79191 cri.go:89] found id: ""
	I0816 00:36:00.749920   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.749931   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:00.749938   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:00.750006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:00.787302   79191 cri.go:89] found id: ""
	I0816 00:36:00.787325   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.787332   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:00.787338   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:00.787392   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:00.821866   79191 cri.go:89] found id: ""
	I0816 00:36:00.821894   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.821906   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:00.821914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:00.821971   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:00.856346   79191 cri.go:89] found id: ""
	I0816 00:36:00.856369   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.856377   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:00.856382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:00.856431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:00.893569   79191 cri.go:89] found id: ""
	I0816 00:36:00.893596   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.893606   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:00.893614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:00.893677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:00.930342   79191 cri.go:89] found id: ""
	I0816 00:36:00.930367   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.930378   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:00.930386   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:00.930622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:00.966039   79191 cri.go:89] found id: ""
	I0816 00:36:00.966071   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.966085   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:00.966095   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:00.966109   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:01.045594   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:01.045631   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:01.089555   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:01.089586   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:01.141597   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:01.141633   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:01.156260   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:01.156286   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:01.230573   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:03.730825   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:03.744766   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:03.744838   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:03.781095   79191 cri.go:89] found id: ""
	I0816 00:36:03.781124   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.781142   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:03.781150   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:03.781215   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:03.815637   79191 cri.go:89] found id: ""
	I0816 00:36:03.815669   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.815680   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:03.815687   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:03.815741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:03.850076   79191 cri.go:89] found id: ""
	I0816 00:36:03.850110   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.850122   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:03.850130   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:03.850185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:03.888840   79191 cri.go:89] found id: ""
	I0816 00:36:03.888863   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.888872   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:03.888879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:03.888941   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:03.928317   79191 cri.go:89] found id: ""
	I0816 00:36:03.928341   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.928350   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:03.928359   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:03.928413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:03.964709   79191 cri.go:89] found id: ""
	I0816 00:36:03.964741   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.964751   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:03.964760   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:03.964830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:03.999877   79191 cri.go:89] found id: ""
	I0816 00:36:03.999902   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.999912   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:03.999919   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:03.999981   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:04.036772   79191 cri.go:89] found id: ""
	I0816 00:36:04.036799   79191 logs.go:276] 0 containers: []
	W0816 00:36:04.036810   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:04.036820   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:04.036833   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:04.118843   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:04.118879   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:04.162491   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:04.162548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:04.215100   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:04.215134   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:04.229043   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:04.229069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:04.307480   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:06.807640   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:06.821144   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:06.821203   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:06.857743   79191 cri.go:89] found id: ""
	I0816 00:36:06.857776   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.857786   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:06.857794   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:06.857872   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:06.895980   79191 cri.go:89] found id: ""
	I0816 00:36:06.896007   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.896018   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:06.896025   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:06.896090   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:06.935358   79191 cri.go:89] found id: ""
	I0816 00:36:06.935389   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.935399   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:06.935406   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:06.935461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:06.971533   79191 cri.go:89] found id: ""
	I0816 00:36:06.971561   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.971572   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:06.971580   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:06.971640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:07.007786   79191 cri.go:89] found id: ""
	I0816 00:36:07.007812   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.007823   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:07.007830   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:07.007890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:07.044060   79191 cri.go:89] found id: ""
	I0816 00:36:07.044092   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.044104   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:07.044112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:07.044185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:07.080058   79191 cri.go:89] found id: ""
	I0816 00:36:07.080085   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.080094   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:07.080101   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:07.080156   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:07.117749   79191 cri.go:89] found id: ""
	I0816 00:36:07.117773   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.117780   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:07.117787   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:07.117799   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:07.171418   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:07.171453   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:07.185520   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:07.185542   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:07.257817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:07.257872   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:07.257888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:07.339530   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:07.339576   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:09.882613   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:09.895873   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:09.895950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:09.936739   79191 cri.go:89] found id: ""
	I0816 00:36:09.936766   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.936774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:09.936780   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:09.936836   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:09.974145   79191 cri.go:89] found id: ""
	I0816 00:36:09.974168   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.974180   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:09.974186   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:09.974243   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:10.012166   79191 cri.go:89] found id: ""
	I0816 00:36:10.012196   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.012206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:10.012214   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:10.012265   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:10.051080   79191 cri.go:89] found id: ""
	I0816 00:36:10.051103   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.051111   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:10.051117   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:10.051176   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:10.088519   79191 cri.go:89] found id: ""
	I0816 00:36:10.088548   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.088559   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:10.088567   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:10.088628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:10.123718   79191 cri.go:89] found id: ""
	I0816 00:36:10.123744   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.123752   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:10.123758   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:10.123805   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:10.161900   79191 cri.go:89] found id: ""
	I0816 00:36:10.161922   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.161929   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:10.161995   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:10.162064   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:10.196380   79191 cri.go:89] found id: ""
	I0816 00:36:10.196408   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.196419   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:10.196429   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:10.196443   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:10.248276   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:10.248309   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:10.262241   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:10.262269   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:10.340562   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:10.340598   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:10.340626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:10.417547   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:10.417578   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:12.962310   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:12.976278   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:12.976338   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:13.014501   79191 cri.go:89] found id: ""
	I0816 00:36:13.014523   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.014530   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:13.014536   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:13.014587   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:13.055942   79191 cri.go:89] found id: ""
	I0816 00:36:13.055970   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.055979   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:13.055987   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:13.056048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:13.090309   79191 cri.go:89] found id: ""
	I0816 00:36:13.090336   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.090346   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:13.090354   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:13.090413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:13.124839   79191 cri.go:89] found id: ""
	I0816 00:36:13.124865   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.124876   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:13.124884   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:13.124945   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:13.164535   79191 cri.go:89] found id: ""
	I0816 00:36:13.164560   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.164567   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:13.164573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:13.164630   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:13.198651   79191 cri.go:89] found id: ""
	I0816 00:36:13.198699   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.198710   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:13.198718   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:13.198785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:13.233255   79191 cri.go:89] found id: ""
	I0816 00:36:13.233278   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.233286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:13.233292   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:13.233348   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:13.267327   79191 cri.go:89] found id: ""
	I0816 00:36:13.267351   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.267359   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:13.267367   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:13.267384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:13.352053   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:13.352089   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:13.393438   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:13.393471   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:13.445397   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:13.445430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:13.459143   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:13.459177   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:13.530160   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.031296   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:16.045557   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:16.045618   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:16.081828   79191 cri.go:89] found id: ""
	I0816 00:36:16.081871   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.081882   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:16.081890   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:16.081949   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:16.116228   79191 cri.go:89] found id: ""
	I0816 00:36:16.116254   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.116264   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:16.116272   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:16.116334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:16.150051   79191 cri.go:89] found id: ""
	I0816 00:36:16.150079   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.150087   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:16.150093   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:16.150139   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:16.186218   79191 cri.go:89] found id: ""
	I0816 00:36:16.186241   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.186248   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:16.186254   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:16.186301   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:16.223223   79191 cri.go:89] found id: ""
	I0816 00:36:16.223255   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.223263   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:16.223270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:16.223316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:16.259929   79191 cri.go:89] found id: ""
	I0816 00:36:16.259953   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.259960   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:16.259970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:16.260099   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:16.294611   79191 cri.go:89] found id: ""
	I0816 00:36:16.294633   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.294641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:16.294649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:16.294725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:16.333492   79191 cri.go:89] found id: ""
	I0816 00:36:16.333523   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.333533   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:16.333544   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:16.333563   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:16.385970   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:16.386002   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:16.400359   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:16.400384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:16.471363   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.471388   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:16.471408   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:16.555990   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:16.556022   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:19.099502   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:19.112649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:19.112706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:19.145809   79191 cri.go:89] found id: ""
	I0816 00:36:19.145837   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.145858   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:19.145865   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:19.145928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:19.183737   79191 cri.go:89] found id: ""
	I0816 00:36:19.183763   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.183774   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:19.183781   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:19.183841   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:19.219729   79191 cri.go:89] found id: ""
	I0816 00:36:19.219756   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.219764   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:19.219770   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:19.219815   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:19.254450   79191 cri.go:89] found id: ""
	I0816 00:36:19.254474   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.254481   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:19.254488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:19.254540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:19.289543   79191 cri.go:89] found id: ""
	I0816 00:36:19.289573   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.289585   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:19.289592   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:19.289651   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:19.330727   79191 cri.go:89] found id: ""
	I0816 00:36:19.330748   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.330756   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:19.330762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:19.330809   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:19.368952   79191 cri.go:89] found id: ""
	I0816 00:36:19.368978   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.368986   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:19.368992   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:19.369048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:19.406211   79191 cri.go:89] found id: ""
	I0816 00:36:19.406247   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.406258   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:19.406268   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:19.406282   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:19.457996   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:19.458032   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:19.472247   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:19.472274   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:19.542840   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:19.542862   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:19.542876   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:19.624478   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:19.624520   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:22.165884   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:22.180005   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:22.180078   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:22.217434   79191 cri.go:89] found id: ""
	I0816 00:36:22.217463   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.217471   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:22.217478   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:22.217534   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:22.250679   79191 cri.go:89] found id: ""
	I0816 00:36:22.250708   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.250717   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:22.250725   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:22.250785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:22.284294   79191 cri.go:89] found id: ""
	I0816 00:36:22.284324   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.284334   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:22.284341   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:22.284403   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:22.320747   79191 cri.go:89] found id: ""
	I0816 00:36:22.320779   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.320790   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:22.320799   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:22.320858   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:22.355763   79191 cri.go:89] found id: ""
	I0816 00:36:22.355793   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.355803   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:22.355811   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:22.355871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:22.392762   79191 cri.go:89] found id: ""
	I0816 00:36:22.392788   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.392796   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:22.392802   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:22.392860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:22.426577   79191 cri.go:89] found id: ""
	I0816 00:36:22.426605   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.426614   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:22.426621   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:22.426682   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:22.459989   79191 cri.go:89] found id: ""
	I0816 00:36:22.460018   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.460030   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:22.460040   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:22.460054   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:22.545782   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:22.545820   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:22.587404   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:22.587431   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:22.638519   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:22.638559   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:22.653064   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:22.653087   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:22.734333   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.234823   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:25.248716   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:25.248787   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:25.284760   79191 cri.go:89] found id: ""
	I0816 00:36:25.284786   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.284793   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:25.284799   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:25.284870   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:25.325523   79191 cri.go:89] found id: ""
	I0816 00:36:25.325548   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.325556   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:25.325562   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:25.325621   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:25.365050   79191 cri.go:89] found id: ""
	I0816 00:36:25.365078   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.365088   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:25.365096   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:25.365155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:25.405005   79191 cri.go:89] found id: ""
	I0816 00:36:25.405038   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.405049   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:25.405062   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:25.405121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:25.444622   79191 cri.go:89] found id: ""
	I0816 00:36:25.444648   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.444656   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:25.444662   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:25.444710   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:25.485364   79191 cri.go:89] found id: ""
	I0816 00:36:25.485394   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.485404   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:25.485413   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:25.485492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:25.521444   79191 cri.go:89] found id: ""
	I0816 00:36:25.521471   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.521482   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:25.521490   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:25.521550   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:25.556763   79191 cri.go:89] found id: ""
	I0816 00:36:25.556789   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.556796   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:25.556805   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:25.556817   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:25.606725   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:25.606759   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:25.623080   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:25.623108   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:25.705238   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.705258   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:25.705280   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:25.782188   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:25.782224   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:28.325018   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:28.337778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:28.337860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:28.378452   79191 cri.go:89] found id: ""
	I0816 00:36:28.378482   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.378492   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:28.378499   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:28.378556   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:28.412103   79191 cri.go:89] found id: ""
	I0816 00:36:28.412132   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.412143   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:28.412150   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:28.412214   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:28.447363   79191 cri.go:89] found id: ""
	I0816 00:36:28.447388   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.447396   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:28.447401   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:28.447452   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:28.481199   79191 cri.go:89] found id: ""
	I0816 00:36:28.481228   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.481242   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:28.481251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:28.481305   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:28.517523   79191 cri.go:89] found id: ""
	I0816 00:36:28.517545   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.517552   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:28.517558   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:28.517620   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:28.552069   79191 cri.go:89] found id: ""
	I0816 00:36:28.552101   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.552112   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:28.552120   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:28.552193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:28.594124   79191 cri.go:89] found id: ""
	I0816 00:36:28.594148   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.594158   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:28.594166   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:28.594228   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:28.631451   79191 cri.go:89] found id: ""
	I0816 00:36:28.631472   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.631480   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:28.631488   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:28.631498   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:28.685335   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:28.685368   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:28.700852   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:28.700877   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:28.773932   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:28.773957   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:28.773972   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:28.848951   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:28.848989   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:31.389208   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:31.403731   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:31.403813   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:31.440979   79191 cri.go:89] found id: ""
	I0816 00:36:31.441010   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.441020   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:31.441028   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:31.441092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:31.476435   79191 cri.go:89] found id: ""
	I0816 00:36:31.476458   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.476465   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:31.476471   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:31.476530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:31.514622   79191 cri.go:89] found id: ""
	I0816 00:36:31.514644   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.514651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:31.514657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:31.514715   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:31.554503   79191 cri.go:89] found id: ""
	I0816 00:36:31.554533   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.554543   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:31.554551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:31.554609   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:31.590283   79191 cri.go:89] found id: ""
	I0816 00:36:31.590317   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.590325   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:31.590332   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:31.590380   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:31.625969   79191 cri.go:89] found id: ""
	I0816 00:36:31.626003   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.626014   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:31.626031   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:31.626102   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:31.660489   79191 cri.go:89] found id: ""
	I0816 00:36:31.660513   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.660520   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:31.660526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:31.660583   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:31.694728   79191 cri.go:89] found id: ""
	I0816 00:36:31.694761   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.694769   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:31.694779   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:31.694790   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:31.760631   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:31.760663   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:31.774858   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:31.774886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:31.851125   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:31.851145   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:31.851156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:31.934491   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:31.934521   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:34.476368   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:34.489252   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:34.489308   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:34.524932   79191 cri.go:89] found id: ""
	I0816 00:36:34.524964   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.524972   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:34.524977   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:34.525032   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:34.559434   79191 cri.go:89] found id: ""
	I0816 00:36:34.559462   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.559473   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:34.559481   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:34.559543   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:34.598700   79191 cri.go:89] found id: ""
	I0816 00:36:34.598728   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.598739   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:34.598747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:34.598808   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:34.632413   79191 cri.go:89] found id: ""
	I0816 00:36:34.632438   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.632448   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:34.632456   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:34.632514   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:34.668385   79191 cri.go:89] found id: ""
	I0816 00:36:34.668409   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.668418   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:34.668425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:34.668486   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:34.703728   79191 cri.go:89] found id: ""
	I0816 00:36:34.703754   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.703764   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:34.703772   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:34.703832   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:34.743119   79191 cri.go:89] found id: ""
	I0816 00:36:34.743152   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.743161   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:34.743171   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:34.743230   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:34.778932   79191 cri.go:89] found id: ""
	I0816 00:36:34.778955   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.778963   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:34.778971   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:34.778987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:34.832050   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:34.832084   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:34.845700   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:34.845728   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:34.917535   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:34.917554   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:34.917565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:35.005262   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:35.005295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:37.547107   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:37.562035   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:37.562095   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:37.605992   79191 cri.go:89] found id: ""
	I0816 00:36:37.606021   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.606028   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:37.606035   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:37.606092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:37.642613   79191 cri.go:89] found id: ""
	I0816 00:36:37.642642   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.642653   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:37.642660   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:37.642708   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:37.677810   79191 cri.go:89] found id: ""
	I0816 00:36:37.677863   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.677875   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:37.677883   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:37.677939   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:37.714490   79191 cri.go:89] found id: ""
	I0816 00:36:37.714514   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.714522   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:37.714529   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:37.714575   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:37.750807   79191 cri.go:89] found id: ""
	I0816 00:36:37.750837   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.750844   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:37.750850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:37.750912   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:37.790307   79191 cri.go:89] found id: ""
	I0816 00:36:37.790337   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.790347   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:37.790355   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:37.790404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:37.826811   79191 cri.go:89] found id: ""
	I0816 00:36:37.826838   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.826848   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:37.826856   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:37.826920   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:37.862066   79191 cri.go:89] found id: ""
	I0816 00:36:37.862091   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.862101   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:37.862112   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:37.862127   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:37.917127   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:37.917161   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:37.932986   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:37.933024   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:38.008715   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:38.008739   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:38.008754   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:38.088744   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:38.088778   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:40.643426   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:40.659064   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:40.659128   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:40.702486   79191 cri.go:89] found id: ""
	I0816 00:36:40.702513   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.702523   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:40.702530   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:40.702595   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:40.736016   79191 cri.go:89] found id: ""
	I0816 00:36:40.736044   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.736057   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:40.736064   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:40.736125   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:40.779665   79191 cri.go:89] found id: ""
	I0816 00:36:40.779704   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.779724   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:40.779733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:40.779795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:40.818612   79191 cri.go:89] found id: ""
	I0816 00:36:40.818633   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.818640   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:40.818647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:40.818695   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:40.855990   79191 cri.go:89] found id: ""
	I0816 00:36:40.856014   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.856021   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:40.856027   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:40.856074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:40.894792   79191 cri.go:89] found id: ""
	I0816 00:36:40.894827   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.894836   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:40.894845   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:40.894894   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:40.932233   79191 cri.go:89] found id: ""
	I0816 00:36:40.932255   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.932263   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:40.932268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:40.932324   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:40.974601   79191 cri.go:89] found id: ""
	I0816 00:36:40.974624   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.974633   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:40.974642   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:40.974660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:41.049185   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:41.049209   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:41.049223   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:41.129446   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:41.129481   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:41.170312   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:41.170341   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:41.226217   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:41.226254   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:43.741485   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:43.756248   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:43.756325   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:43.792440   79191 cri.go:89] found id: ""
	I0816 00:36:43.792469   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.792480   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:43.792488   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:43.792549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:43.829906   79191 cri.go:89] found id: ""
	I0816 00:36:43.829933   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.829941   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:43.829947   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:43.830003   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:43.880305   79191 cri.go:89] found id: ""
	I0816 00:36:43.880330   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.880337   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:43.880343   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:43.880399   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:43.937899   79191 cri.go:89] found id: ""
	I0816 00:36:43.937929   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.937939   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:43.937953   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:43.938023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:43.997578   79191 cri.go:89] found id: ""
	I0816 00:36:43.997603   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.997610   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:43.997620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:43.997672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:44.035606   79191 cri.go:89] found id: ""
	I0816 00:36:44.035629   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.035637   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:44.035643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:44.035692   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:44.072919   79191 cri.go:89] found id: ""
	I0816 00:36:44.072950   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.072961   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:44.072968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:44.073043   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:44.108629   79191 cri.go:89] found id: ""
	I0816 00:36:44.108659   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.108681   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:44.108692   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:44.108705   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:44.149127   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:44.149151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:44.201694   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:44.201737   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:44.217161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:44.217199   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:44.284335   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:44.284362   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:44.284379   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:46.869196   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:46.883519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:46.883584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:46.924767   79191 cri.go:89] found id: ""
	I0816 00:36:46.924806   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.924821   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:46.924829   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:46.924889   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:46.963282   79191 cri.go:89] found id: ""
	I0816 00:36:46.963309   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.963320   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:46.963327   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:46.963389   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:47.001421   79191 cri.go:89] found id: ""
	I0816 00:36:47.001450   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.001458   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:47.001463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:47.001518   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:47.037679   79191 cri.go:89] found id: ""
	I0816 00:36:47.037702   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.037713   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:47.037720   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:47.037778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:47.078009   79191 cri.go:89] found id: ""
	I0816 00:36:47.078039   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.078050   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:47.078056   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:47.078113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:47.119032   79191 cri.go:89] found id: ""
	I0816 00:36:47.119056   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.119064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:47.119069   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:47.119127   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:47.154893   79191 cri.go:89] found id: ""
	I0816 00:36:47.154919   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.154925   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:47.154933   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:47.154993   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:47.194544   79191 cri.go:89] found id: ""
	I0816 00:36:47.194571   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.194582   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:47.194592   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:47.194612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:47.267148   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:47.267172   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:47.267186   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:47.345257   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:47.345295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:47.386207   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:47.386233   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:47.436171   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:47.436201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:49.949977   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:49.965702   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:49.965761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:50.002443   79191 cri.go:89] found id: ""
	I0816 00:36:50.002470   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.002481   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:50.002489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:50.002548   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:50.039123   79191 cri.go:89] found id: ""
	I0816 00:36:50.039155   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.039162   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:50.039168   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:50.039220   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:50.074487   79191 cri.go:89] found id: ""
	I0816 00:36:50.074517   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.074527   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:50.074535   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:50.074593   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:50.108980   79191 cri.go:89] found id: ""
	I0816 00:36:50.109008   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.109018   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:50.109025   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:50.109082   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:50.149182   79191 cri.go:89] found id: ""
	I0816 00:36:50.149202   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.149209   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:50.149215   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:50.149261   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:50.183066   79191 cri.go:89] found id: ""
	I0816 00:36:50.183094   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.183102   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:50.183108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:50.183165   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:50.220200   79191 cri.go:89] found id: ""
	I0816 00:36:50.220231   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.220240   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:50.220246   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:50.220302   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:50.258059   79191 cri.go:89] found id: ""
	I0816 00:36:50.258083   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.258092   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:50.258100   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:50.258110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:50.300560   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:50.300591   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:50.350548   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:50.350581   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:50.364792   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:50.364816   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:50.437723   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:50.437746   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:50.437761   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:53.015846   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:53.029184   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:53.029246   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:53.064306   79191 cri.go:89] found id: ""
	I0816 00:36:53.064338   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.064346   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:53.064352   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:53.064404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:53.104425   79191 cri.go:89] found id: ""
	I0816 00:36:53.104458   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.104468   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:53.104476   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:53.104538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:53.139470   79191 cri.go:89] found id: ""
	I0816 00:36:53.139493   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.139500   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:53.139506   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:53.139551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:53.185195   79191 cri.go:89] found id: ""
	I0816 00:36:53.185225   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.185234   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:53.185242   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:53.185300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:53.221897   79191 cri.go:89] found id: ""
	I0816 00:36:53.221925   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.221935   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:53.221943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:53.222006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:53.258810   79191 cri.go:89] found id: ""
	I0816 00:36:53.258841   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.258852   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:53.258859   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:53.258924   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:53.298672   79191 cri.go:89] found id: ""
	I0816 00:36:53.298701   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.298711   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:53.298719   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:53.298778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:53.333498   79191 cri.go:89] found id: ""
	I0816 00:36:53.333520   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.333527   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:53.333535   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:53.333548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.370495   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:53.370530   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:53.423938   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:53.423982   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:53.438897   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:53.438926   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:53.505951   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:53.505973   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:53.505987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.089638   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:56.103832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:56.103893   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:56.148010   79191 cri.go:89] found id: ""
	I0816 00:36:56.148038   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.148048   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:56.148057   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:56.148120   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:56.185631   79191 cri.go:89] found id: ""
	I0816 00:36:56.185663   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.185673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:56.185680   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:56.185739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:56.222064   79191 cri.go:89] found id: ""
	I0816 00:36:56.222093   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.222104   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:56.222112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:56.222162   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:56.260462   79191 cri.go:89] found id: ""
	I0816 00:36:56.260494   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.260504   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:56.260513   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:56.260574   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:56.296125   79191 cri.go:89] found id: ""
	I0816 00:36:56.296154   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.296164   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:56.296172   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:56.296236   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:56.333278   79191 cri.go:89] found id: ""
	I0816 00:36:56.333305   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.333316   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:56.333324   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:56.333385   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:56.368924   79191 cri.go:89] found id: ""
	I0816 00:36:56.368952   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.368962   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:56.368970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:56.369034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:56.407148   79191 cri.go:89] found id: ""
	I0816 00:36:56.407180   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.407190   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:56.407201   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:56.407215   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:56.464745   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:56.464779   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:56.478177   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:56.478204   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:56.555827   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:56.555851   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:56.555864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.640001   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:56.640040   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:59.181423   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:59.195722   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:59.195804   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:59.232043   79191 cri.go:89] found id: ""
	I0816 00:36:59.232067   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.232075   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:59.232081   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:59.232132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:59.270628   79191 cri.go:89] found id: ""
	I0816 00:36:59.270656   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.270673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:59.270681   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:59.270743   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:59.304054   79191 cri.go:89] found id: ""
	I0816 00:36:59.304089   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.304100   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:59.304108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:59.304169   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:59.339386   79191 cri.go:89] found id: ""
	I0816 00:36:59.339410   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.339417   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:59.339423   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:59.339483   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:59.381313   79191 cri.go:89] found id: ""
	I0816 00:36:59.381361   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.381376   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:59.381385   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:59.381449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:59.417060   79191 cri.go:89] found id: ""
	I0816 00:36:59.417090   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.417101   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:59.417109   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:59.417160   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:59.461034   79191 cri.go:89] found id: ""
	I0816 00:36:59.461060   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.461071   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:59.461078   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:59.461136   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:59.496248   79191 cri.go:89] found id: ""
	I0816 00:36:59.496276   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.496286   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:59.496297   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:59.496312   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:59.566779   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:59.566803   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:59.566829   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:59.651999   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:59.652034   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:59.693286   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:59.693310   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:59.746677   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:59.746711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:02.262527   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:02.277903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:02.277965   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:02.323846   79191 cri.go:89] found id: ""
	I0816 00:37:02.323868   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.323876   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:02.323882   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:02.323938   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:02.359552   79191 cri.go:89] found id: ""
	I0816 00:37:02.359578   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.359589   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:02.359596   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:02.359657   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:02.395062   79191 cri.go:89] found id: ""
	I0816 00:37:02.395087   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.395094   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:02.395100   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:02.395155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:02.432612   79191 cri.go:89] found id: ""
	I0816 00:37:02.432636   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.432646   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:02.432654   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:02.432712   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:02.468612   79191 cri.go:89] found id: ""
	I0816 00:37:02.468640   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.468651   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:02.468659   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:02.468716   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:02.514472   79191 cri.go:89] found id: ""
	I0816 00:37:02.514500   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.514511   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:02.514519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:02.514576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:02.551964   79191 cri.go:89] found id: ""
	I0816 00:37:02.551993   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.552003   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:02.552011   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:02.552061   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:02.588018   79191 cri.go:89] found id: ""
	I0816 00:37:02.588044   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.588053   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:02.588063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:02.588081   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:02.638836   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:02.638875   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:02.653581   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:02.653613   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:02.737018   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.737047   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:02.737065   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:02.819726   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:02.819763   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.364943   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:05.379433   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:05.379492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:05.419165   79191 cri.go:89] found id: ""
	I0816 00:37:05.419191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.419198   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:05.419204   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:05.419264   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:05.454417   79191 cri.go:89] found id: ""
	I0816 00:37:05.454438   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.454446   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:05.454452   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:05.454497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:05.490162   79191 cri.go:89] found id: ""
	I0816 00:37:05.490191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.490203   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:05.490210   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:05.490268   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:05.527303   79191 cri.go:89] found id: ""
	I0816 00:37:05.527327   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.527334   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:05.527340   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:05.527393   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:05.562271   79191 cri.go:89] found id: ""
	I0816 00:37:05.562302   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.562310   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:05.562316   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:05.562374   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:05.597800   79191 cri.go:89] found id: ""
	I0816 00:37:05.597823   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.597830   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:05.597837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:05.597905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:05.633996   79191 cri.go:89] found id: ""
	I0816 00:37:05.634021   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.634028   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:05.634034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:05.634088   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:05.672408   79191 cri.go:89] found id: ""
	I0816 00:37:05.672437   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.672446   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:05.672457   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:05.672472   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:05.750956   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:05.750995   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.795573   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:05.795603   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:05.848560   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:05.848593   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:05.862245   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:05.862268   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:05.938704   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:08.439692   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:08.452850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:08.452927   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:08.490015   79191 cri.go:89] found id: ""
	I0816 00:37:08.490043   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.490053   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:08.490060   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:08.490121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:08.529631   79191 cri.go:89] found id: ""
	I0816 00:37:08.529665   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.529676   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:08.529689   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:08.529747   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:08.564858   79191 cri.go:89] found id: ""
	I0816 00:37:08.564885   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.564896   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:08.564904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:08.564966   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:08.601144   79191 cri.go:89] found id: ""
	I0816 00:37:08.601180   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.601190   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:08.601200   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:08.601257   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:08.637050   79191 cri.go:89] found id: ""
	I0816 00:37:08.637081   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.637090   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:08.637098   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:08.637158   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:08.670613   79191 cri.go:89] found id: ""
	I0816 00:37:08.670644   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.670655   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:08.670663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:08.670727   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:08.704664   79191 cri.go:89] found id: ""
	I0816 00:37:08.704690   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.704698   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:08.704704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:08.704754   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:08.741307   79191 cri.go:89] found id: ""
	I0816 00:37:08.741337   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.741348   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:08.741360   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:08.741374   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:08.755434   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:08.755459   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:08.828118   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:08.828140   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:08.828151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:08.911565   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:08.911605   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:08.954907   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:08.954937   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.508848   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:11.521998   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:11.522060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:11.558581   79191 cri.go:89] found id: ""
	I0816 00:37:11.558611   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.558622   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:11.558630   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:11.558697   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:11.593798   79191 cri.go:89] found id: ""
	I0816 00:37:11.593822   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.593830   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:11.593836   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:11.593905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:11.629619   79191 cri.go:89] found id: ""
	I0816 00:37:11.629648   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.629658   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:11.629664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:11.629717   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:11.666521   79191 cri.go:89] found id: ""
	I0816 00:37:11.666548   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.666556   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:11.666562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:11.666607   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:11.703374   79191 cri.go:89] found id: ""
	I0816 00:37:11.703406   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.703417   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:11.703427   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:11.703491   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:11.739374   79191 cri.go:89] found id: ""
	I0816 00:37:11.739403   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.739413   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:11.739420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:11.739475   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:11.774981   79191 cri.go:89] found id: ""
	I0816 00:37:11.775006   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.775013   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:11.775019   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:11.775074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:11.809561   79191 cri.go:89] found id: ""
	I0816 00:37:11.809590   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.809601   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:11.809612   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:11.809626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.863071   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:11.863116   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:11.878161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:11.878191   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:11.953572   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.953594   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:11.953608   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:12.035815   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:12.035848   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:14.576547   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:14.590747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:14.590802   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:14.626732   79191 cri.go:89] found id: ""
	I0816 00:37:14.626762   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.626774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:14.626781   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:14.626833   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:14.662954   79191 cri.go:89] found id: ""
	I0816 00:37:14.662978   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.662988   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:14.662996   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:14.663057   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:14.697618   79191 cri.go:89] found id: ""
	I0816 00:37:14.697646   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.697656   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:14.697663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:14.697725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:14.735137   79191 cri.go:89] found id: ""
	I0816 00:37:14.735161   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.735168   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:14.735174   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:14.735222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:14.770625   79191 cri.go:89] found id: ""
	I0816 00:37:14.770648   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.770655   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:14.770660   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:14.770718   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:14.808678   79191 cri.go:89] found id: ""
	I0816 00:37:14.808708   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.808718   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:14.808726   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:14.808795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:14.847321   79191 cri.go:89] found id: ""
	I0816 00:37:14.847349   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.847360   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:14.847368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:14.847425   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:14.886110   79191 cri.go:89] found id: ""
	I0816 00:37:14.886136   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.886147   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:14.886156   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:14.886175   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:14.971978   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:14.972013   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:15.015620   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:15.015644   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:15.067372   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:15.067405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:15.081629   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:15.081652   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:15.151580   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:17.652362   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:17.666201   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:17.666278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:17.698723   79191 cri.go:89] found id: ""
	I0816 00:37:17.698760   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.698772   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:17.698778   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:17.698827   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:17.732854   79191 cri.go:89] found id: ""
	I0816 00:37:17.732883   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.732893   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:17.732901   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:17.732957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:17.767665   79191 cri.go:89] found id: ""
	I0816 00:37:17.767691   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.767701   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:17.767709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:17.767769   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:17.801490   79191 cri.go:89] found id: ""
	I0816 00:37:17.801512   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.801520   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:17.801526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:17.801579   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:17.837451   79191 cri.go:89] found id: ""
	I0816 00:37:17.837479   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.837490   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:17.837498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:17.837562   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:17.872898   79191 cri.go:89] found id: ""
	I0816 00:37:17.872924   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.872934   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:17.872943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:17.873002   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:17.910325   79191 cri.go:89] found id: ""
	I0816 00:37:17.910352   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.910362   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:17.910370   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:17.910431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:17.946885   79191 cri.go:89] found id: ""
	I0816 00:37:17.946909   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.946916   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:17.946923   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:17.946935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:18.014011   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:18.014045   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:18.028850   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:18.028886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:18.099362   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:18.099381   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:18.099396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:18.180552   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:18.180588   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:20.720810   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:20.733806   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:20.733887   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:20.771300   79191 cri.go:89] found id: ""
	I0816 00:37:20.771323   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.771330   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:20.771336   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:20.771394   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:20.812327   79191 cri.go:89] found id: ""
	I0816 00:37:20.812355   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.812362   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:20.812369   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:20.812430   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:20.846830   79191 cri.go:89] found id: ""
	I0816 00:37:20.846861   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.846872   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:20.846879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:20.846948   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:20.889979   79191 cri.go:89] found id: ""
	I0816 00:37:20.890005   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.890015   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:20.890023   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:20.890086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:20.933732   79191 cri.go:89] found id: ""
	I0816 00:37:20.933762   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.933772   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:20.933778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:20.933824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:20.972341   79191 cri.go:89] found id: ""
	I0816 00:37:20.972368   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.972376   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:20.972382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:20.972444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:21.011179   79191 cri.go:89] found id: ""
	I0816 00:37:21.011207   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.011216   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:21.011224   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:21.011282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:21.045645   79191 cri.go:89] found id: ""
	I0816 00:37:21.045668   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.045675   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:21.045684   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:21.045694   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:21.099289   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:21.099321   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:21.113814   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:21.113858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:21.186314   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:21.186337   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:21.186355   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:21.271116   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:21.271152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:23.818598   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:23.832330   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:23.832387   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:23.869258   79191 cri.go:89] found id: ""
	I0816 00:37:23.869279   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.869286   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:23.869293   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:23.869342   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:23.903958   79191 cri.go:89] found id: ""
	I0816 00:37:23.903989   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.903999   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:23.904006   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:23.904060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:23.943110   79191 cri.go:89] found id: ""
	I0816 00:37:23.943142   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.943153   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:23.943160   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:23.943222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:23.979325   79191 cri.go:89] found id: ""
	I0816 00:37:23.979356   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.979366   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:23.979374   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:23.979435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:24.017570   79191 cri.go:89] found id: ""
	I0816 00:37:24.017597   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.017607   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:24.017614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:24.017684   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:24.051522   79191 cri.go:89] found id: ""
	I0816 00:37:24.051546   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.051555   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:24.051562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:24.051626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:24.087536   79191 cri.go:89] found id: ""
	I0816 00:37:24.087561   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.087572   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:24.087579   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:24.087644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:24.123203   79191 cri.go:89] found id: ""
	I0816 00:37:24.123233   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.123245   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:24.123256   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:24.123276   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:24.178185   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:24.178225   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:24.192895   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:24.192920   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:24.273471   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:24.273492   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:24.273504   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:24.357890   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:24.357936   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:26.950399   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:26.964347   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:26.964406   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:27.004694   79191 cri.go:89] found id: ""
	I0816 00:37:27.004722   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.004738   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:27.004745   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:27.004800   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:27.040051   79191 cri.go:89] found id: ""
	I0816 00:37:27.040080   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.040090   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:27.040096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:27.040144   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:27.088614   79191 cri.go:89] found id: ""
	I0816 00:37:27.088642   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.088651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:27.088657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:27.088732   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:27.125427   79191 cri.go:89] found id: ""
	I0816 00:37:27.125450   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.125457   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:27.125464   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:27.125511   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:27.158562   79191 cri.go:89] found id: ""
	I0816 00:37:27.158592   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.158602   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:27.158609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:27.158672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:27.192986   79191 cri.go:89] found id: ""
	I0816 00:37:27.193015   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.193026   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:27.193034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:27.193091   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:27.228786   79191 cri.go:89] found id: ""
	I0816 00:37:27.228828   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.228847   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:27.228858   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:27.228921   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:27.262776   79191 cri.go:89] found id: ""
	I0816 00:37:27.262808   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.262819   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:27.262829   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:27.262844   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:27.276444   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:27.276470   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:27.349918   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:27.349946   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:27.349958   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:27.435030   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:27.435061   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:27.484043   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:27.484069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.038376   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:30.051467   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:30.051530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:30.086346   79191 cri.go:89] found id: ""
	I0816 00:37:30.086376   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.086386   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:30.086394   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:30.086454   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:30.127665   79191 cri.go:89] found id: ""
	I0816 00:37:30.127691   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.127699   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:30.127704   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:30.127757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:30.169901   79191 cri.go:89] found id: ""
	I0816 00:37:30.169929   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.169939   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:30.169950   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:30.170013   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:30.212501   79191 cri.go:89] found id: ""
	I0816 00:37:30.212523   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.212530   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:30.212537   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:30.212584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:30.256560   79191 cri.go:89] found id: ""
	I0816 00:37:30.256583   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.256591   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:30.256597   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:30.256646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:30.291062   79191 cri.go:89] found id: ""
	I0816 00:37:30.291086   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.291093   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:30.291099   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:30.291143   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:30.328325   79191 cri.go:89] found id: ""
	I0816 00:37:30.328353   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.328361   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:30.328368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:30.328415   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:30.364946   79191 cri.go:89] found id: ""
	I0816 00:37:30.364972   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.364981   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:30.364991   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:30.365005   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:30.408090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:30.408117   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.463421   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:30.463456   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:30.479679   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:30.479711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:30.555394   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:30.555416   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:30.555432   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:33.137366   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:33.150970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:33.151030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:33.191020   79191 cri.go:89] found id: ""
	I0816 00:37:33.191047   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.191055   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:33.191061   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:33.191112   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:33.227971   79191 cri.go:89] found id: ""
	I0816 00:37:33.228022   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.228030   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:33.228038   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:33.228089   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:33.265036   79191 cri.go:89] found id: ""
	I0816 00:37:33.265065   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.265074   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:33.265079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:33.265126   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:33.300385   79191 cri.go:89] found id: ""
	I0816 00:37:33.300411   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.300418   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:33.300425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:33.300487   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:33.335727   79191 cri.go:89] found id: ""
	I0816 00:37:33.335757   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.335768   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:33.335776   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:33.335839   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:33.373458   79191 cri.go:89] found id: ""
	I0816 00:37:33.373489   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.373500   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:33.373507   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:33.373568   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:33.410380   79191 cri.go:89] found id: ""
	I0816 00:37:33.410404   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.410413   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:33.410420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:33.410480   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:33.451007   79191 cri.go:89] found id: ""
	I0816 00:37:33.451030   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.451040   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:33.451049   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:33.451062   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:33.502215   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:33.502249   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:33.516123   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:33.516152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:33.590898   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:33.590921   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:33.590944   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:33.668404   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:33.668455   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:36.209671   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:36.223498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:36.223561   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:36.258980   79191 cri.go:89] found id: ""
	I0816 00:37:36.259041   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.259056   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:36.259064   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:36.259123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:36.293659   79191 cri.go:89] found id: ""
	I0816 00:37:36.293687   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.293694   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:36.293703   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:36.293761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:36.331729   79191 cri.go:89] found id: ""
	I0816 00:37:36.331756   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.331766   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:36.331773   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:36.331830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:36.368441   79191 cri.go:89] found id: ""
	I0816 00:37:36.368470   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.368479   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:36.368486   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:36.368533   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:36.405338   79191 cri.go:89] found id: ""
	I0816 00:37:36.405368   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.405380   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:36.405389   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:36.405448   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:36.441986   79191 cri.go:89] found id: ""
	I0816 00:37:36.442018   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.442029   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:36.442038   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:36.442097   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:36.478102   79191 cri.go:89] found id: ""
	I0816 00:37:36.478183   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.478197   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:36.478206   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:36.478269   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:36.517138   79191 cri.go:89] found id: ""
	I0816 00:37:36.517167   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.517178   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:36.517190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:36.517205   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:36.570009   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:36.570042   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:36.583534   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:36.583565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:36.651765   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:36.651794   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:36.651808   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:36.732836   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:36.732870   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:39.274490   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:39.288528   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:39.288591   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:39.325560   79191 cri.go:89] found id: ""
	I0816 00:37:39.325582   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.325589   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:39.325599   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:39.325656   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:39.365795   79191 cri.go:89] found id: ""
	I0816 00:37:39.365822   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.365829   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:39.365837   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:39.365906   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:39.404933   79191 cri.go:89] found id: ""
	I0816 00:37:39.404961   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.404971   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:39.404977   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:39.405041   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:39.442712   79191 cri.go:89] found id: ""
	I0816 00:37:39.442736   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.442747   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:39.442754   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:39.442814   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:39.484533   79191 cri.go:89] found id: ""
	I0816 00:37:39.484557   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.484566   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:39.484573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:39.484636   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:39.522089   79191 cri.go:89] found id: ""
	I0816 00:37:39.522115   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.522125   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:39.522133   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:39.522194   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:39.557099   79191 cri.go:89] found id: ""
	I0816 00:37:39.557128   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.557138   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:39.557145   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:39.557205   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:39.594809   79191 cri.go:89] found id: ""
	I0816 00:37:39.594838   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.594849   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:39.594859   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:39.594874   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:39.611079   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:39.611110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:39.683156   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:39.683182   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:39.683198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:39.761198   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:39.761235   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:39.800972   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:39.801003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:42.354816   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:42.368610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:42.368673   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:42.404716   79191 cri.go:89] found id: ""
	I0816 00:37:42.404738   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.404745   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:42.404753   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:42.404798   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:42.441619   79191 cri.go:89] found id: ""
	I0816 00:37:42.441649   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.441660   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:42.441667   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:42.441726   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:42.480928   79191 cri.go:89] found id: ""
	I0816 00:37:42.480965   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.480976   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:42.480983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:42.481051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:42.519187   79191 cri.go:89] found id: ""
	I0816 00:37:42.519216   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.519226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:42.519234   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:42.519292   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:42.554928   79191 cri.go:89] found id: ""
	I0816 00:37:42.554956   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.554967   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:42.554974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:42.555035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:42.593436   79191 cri.go:89] found id: ""
	I0816 00:37:42.593472   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.593481   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:42.593487   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:42.593545   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:42.628078   79191 cri.go:89] found id: ""
	I0816 00:37:42.628101   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.628108   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:42.628113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:42.628172   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:42.662824   79191 cri.go:89] found id: ""
	I0816 00:37:42.662852   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.662862   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:42.662871   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:42.662888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:42.677267   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:42.677290   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:42.749570   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:42.749599   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:42.749615   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:42.831177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:42.831213   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:42.871928   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:42.871957   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.430704   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:45.444400   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:45.444461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:45.479503   79191 cri.go:89] found id: ""
	I0816 00:37:45.479529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.479537   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:45.479543   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:45.479596   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:45.518877   79191 cri.go:89] found id: ""
	I0816 00:37:45.518907   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.518917   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:45.518925   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:45.518992   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:45.553936   79191 cri.go:89] found id: ""
	I0816 00:37:45.553966   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.553977   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:45.553984   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:45.554035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:45.593054   79191 cri.go:89] found id: ""
	I0816 00:37:45.593081   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.593088   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:45.593095   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:45.593147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:45.631503   79191 cri.go:89] found id: ""
	I0816 00:37:45.631529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.631537   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:45.631543   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:45.631599   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:45.667435   79191 cri.go:89] found id: ""
	I0816 00:37:45.667459   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.667466   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:45.667473   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:45.667529   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:45.702140   79191 cri.go:89] found id: ""
	I0816 00:37:45.702168   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.702179   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:45.702187   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:45.702250   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:45.736015   79191 cri.go:89] found id: ""
	I0816 00:37:45.736048   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.736059   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:45.736070   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:45.736085   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:45.817392   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:45.817427   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:45.856421   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:45.856451   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.912429   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:45.912476   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:45.928411   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:45.928435   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:46.001141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:48.501317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:48.515114   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:48.515190   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:48.553776   79191 cri.go:89] found id: ""
	I0816 00:37:48.553802   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.553810   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:48.553816   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:48.553890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:48.589760   79191 cri.go:89] found id: ""
	I0816 00:37:48.589786   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.589794   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:48.589800   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:48.589871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:48.629792   79191 cri.go:89] found id: ""
	I0816 00:37:48.629816   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.629825   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:48.629833   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:48.629898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:48.668824   79191 cri.go:89] found id: ""
	I0816 00:37:48.668852   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.668860   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:48.668866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:48.668930   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:48.704584   79191 cri.go:89] found id: ""
	I0816 00:37:48.704615   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.704626   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:48.704634   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:48.704691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:48.738833   79191 cri.go:89] found id: ""
	I0816 00:37:48.738855   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.738863   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:48.738868   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:48.738928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:48.774943   79191 cri.go:89] found id: ""
	I0816 00:37:48.774972   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.774981   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:48.774989   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:48.775051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:48.808802   79191 cri.go:89] found id: ""
	I0816 00:37:48.808825   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.808832   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:48.808841   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:48.808856   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:48.858849   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:48.858880   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:48.873338   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:48.873369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:48.950172   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:48.950195   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:48.950209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:49.038642   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:49.038679   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:51.581947   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:51.596612   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:51.596691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:51.631468   79191 cri.go:89] found id: ""
	I0816 00:37:51.631498   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.631509   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:51.631517   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:51.631577   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:51.666922   79191 cri.go:89] found id: ""
	I0816 00:37:51.666953   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.666963   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:51.666971   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:51.667034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:51.707081   79191 cri.go:89] found id: ""
	I0816 00:37:51.707109   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.707116   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:51.707122   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:51.707189   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:51.743884   79191 cri.go:89] found id: ""
	I0816 00:37:51.743912   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.743925   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:51.743932   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:51.743990   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:51.779565   79191 cri.go:89] found id: ""
	I0816 00:37:51.779595   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.779603   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:51.779610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:51.779658   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:51.818800   79191 cri.go:89] found id: ""
	I0816 00:37:51.818824   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.818831   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:51.818837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:51.818899   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:51.855343   79191 cri.go:89] found id: ""
	I0816 00:37:51.855367   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.855374   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:51.855380   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:51.855426   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:51.890463   79191 cri.go:89] found id: ""
	I0816 00:37:51.890496   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.890505   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:51.890513   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:51.890526   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:51.977168   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:51.977209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:52.021626   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:52.021660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:52.076983   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:52.077027   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:52.092111   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:52.092142   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:52.172738   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:54.673192   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:54.688780   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.688853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.725279   79191 cri.go:89] found id: ""
	I0816 00:37:54.725308   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.725318   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:54.725325   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.725383   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:54.764326   79191 cri.go:89] found id: ""
	I0816 00:37:54.764353   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.764364   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:54.764372   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:54.764423   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:54.805221   79191 cri.go:89] found id: ""
	I0816 00:37:54.805252   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.805263   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:54.805270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:54.805334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:54.849724   79191 cri.go:89] found id: ""
	I0816 00:37:54.849750   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.849759   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:54.849765   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:54.849824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:54.894438   79191 cri.go:89] found id: ""
	I0816 00:37:54.894460   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.894468   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:54.894475   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:54.894532   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:54.933400   79191 cri.go:89] found id: ""
	I0816 00:37:54.933422   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.933431   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:54.933439   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:54.933497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:54.982249   79191 cri.go:89] found id: ""
	I0816 00:37:54.982277   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.982286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:54.982294   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:54.982353   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:55.024431   79191 cri.go:89] found id: ""
	I0816 00:37:55.024458   79191 logs.go:276] 0 containers: []
	W0816 00:37:55.024469   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:55.024479   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.024499   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:55.107089   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:55.107119   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:55.148949   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.148981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.202865   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.202902   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.218528   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.218556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:55.304995   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:57.805335   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:57.819904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:57.819989   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:57.856119   79191 cri.go:89] found id: ""
	I0816 00:37:57.856146   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.856153   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:57.856160   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:57.856217   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:57.892797   79191 cri.go:89] found id: ""
	I0816 00:37:57.892825   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.892833   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:57.892841   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:57.892905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:57.928753   79191 cri.go:89] found id: ""
	I0816 00:37:57.928784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.928795   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:57.928803   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:57.928884   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:57.963432   79191 cri.go:89] found id: ""
	I0816 00:37:57.963462   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.963474   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:57.963481   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:57.963538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.998759   79191 cri.go:89] found id: ""
	I0816 00:37:57.998784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.998793   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:57.998801   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:57.998886   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:58.035262   79191 cri.go:89] found id: ""
	I0816 00:37:58.035288   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.035296   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:58.035303   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:58.035358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:58.071052   79191 cri.go:89] found id: ""
	I0816 00:37:58.071079   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.071087   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:58.071092   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:58.071150   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:58.110047   79191 cri.go:89] found id: ""
	I0816 00:37:58.110074   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.110083   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:58.110090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:58.110101   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:58.164792   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:58.164823   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:58.178742   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:58.178770   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:58.251861   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:58.251899   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:58.251921   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:58.329805   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:58.329859   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:00.872911   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:00.887914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:00.887986   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:00.925562   79191 cri.go:89] found id: ""
	I0816 00:38:00.925595   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.925606   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:00.925615   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:00.925669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:00.961476   79191 cri.go:89] found id: ""
	I0816 00:38:00.961498   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.961505   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:00.961510   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:00.961554   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:00.997575   79191 cri.go:89] found id: ""
	I0816 00:38:00.997599   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.997608   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:00.997616   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:00.997677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:01.035130   79191 cri.go:89] found id: ""
	I0816 00:38:01.035158   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.035169   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:01.035177   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:01.035232   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:01.073768   79191 cri.go:89] found id: ""
	I0816 00:38:01.073800   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.073811   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:01.073819   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:01.073898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:01.107904   79191 cri.go:89] found id: ""
	I0816 00:38:01.107928   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.107937   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:01.107943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:01.108004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:01.142654   79191 cri.go:89] found id: ""
	I0816 00:38:01.142690   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.142701   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:01.142709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:01.142766   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:01.187565   79191 cri.go:89] found id: ""
	I0816 00:38:01.187599   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.187610   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:01.187621   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:01.187635   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:01.265462   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:01.265493   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:01.265508   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:01.346988   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:01.347020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:01.390977   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:01.391006   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.443858   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:01.443892   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:03.959040   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:03.973674   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:03.973758   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:04.013606   79191 cri.go:89] found id: ""
	I0816 00:38:04.013653   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.013661   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:04.013667   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:04.013737   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:04.054558   79191 cri.go:89] found id: ""
	I0816 00:38:04.054590   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.054602   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:04.054609   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:04.054667   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:04.097116   79191 cri.go:89] found id: ""
	I0816 00:38:04.097143   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.097154   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:04.097162   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:04.097223   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:04.136770   79191 cri.go:89] found id: ""
	I0816 00:38:04.136798   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.136809   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:04.136816   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:04.136865   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:04.171906   79191 cri.go:89] found id: ""
	I0816 00:38:04.171929   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.171937   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:04.171943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:04.172004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:04.208694   79191 cri.go:89] found id: ""
	I0816 00:38:04.208725   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.208735   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:04.208744   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:04.208803   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:04.276713   79191 cri.go:89] found id: ""
	I0816 00:38:04.276744   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.276755   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:04.276763   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:04.276823   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:04.316646   79191 cri.go:89] found id: ""
	I0816 00:38:04.316669   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.316696   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:04.316707   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:04.316722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:04.329819   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:04.329864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:04.399032   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:04.399052   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:04.399080   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.487665   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:04.487698   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:04.530937   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.530962   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:07.087584   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:07.102015   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:07.102086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:07.139530   79191 cri.go:89] found id: ""
	I0816 00:38:07.139559   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.139569   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:07.139577   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:07.139642   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:07.179630   79191 cri.go:89] found id: ""
	I0816 00:38:07.179659   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.179669   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:07.179675   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:07.179734   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:07.216407   79191 cri.go:89] found id: ""
	I0816 00:38:07.216435   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.216444   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:07.216449   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:07.216509   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:07.252511   79191 cri.go:89] found id: ""
	I0816 00:38:07.252536   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.252544   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:07.252551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:07.252613   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:07.288651   79191 cri.go:89] found id: ""
	I0816 00:38:07.288679   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.288689   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:07.288698   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:07.288757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:07.325910   79191 cri.go:89] found id: ""
	I0816 00:38:07.325963   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.325974   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:07.325982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:07.326046   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:07.362202   79191 cri.go:89] found id: ""
	I0816 00:38:07.362230   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.362244   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:07.362251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:07.362316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:07.405272   79191 cri.go:89] found id: ""
	I0816 00:38:07.405302   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.405313   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:07.405324   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:07.405339   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:07.461186   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:07.461222   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:07.475503   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:07.475544   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:07.555146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:07.555165   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:07.555179   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:07.635162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:07.635201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.174600   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:10.190418   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:10.190479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:10.251925   79191 cri.go:89] found id: ""
	I0816 00:38:10.251960   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.251969   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:10.251974   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:10.252027   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:10.289038   79191 cri.go:89] found id: ""
	I0816 00:38:10.289078   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.289088   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:10.289096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:10.289153   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:10.334562   79191 cri.go:89] found id: ""
	I0816 00:38:10.334591   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.334601   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:10.334609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:10.334669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:10.371971   79191 cri.go:89] found id: ""
	I0816 00:38:10.372000   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.372010   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:10.372018   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:10.372084   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:10.409654   79191 cri.go:89] found id: ""
	I0816 00:38:10.409685   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.409696   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:10.409703   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:10.409770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:10.446639   79191 cri.go:89] found id: ""
	I0816 00:38:10.446666   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.446675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:10.446683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:10.446750   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:10.483601   79191 cri.go:89] found id: ""
	I0816 00:38:10.483629   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.483641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:10.483648   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:10.483707   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:10.519640   79191 cri.go:89] found id: ""
	I0816 00:38:10.519670   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.519679   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:10.519690   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:10.519704   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:10.603281   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:10.603300   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:10.603311   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:10.689162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:10.689198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.730701   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:10.730724   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:10.780411   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:10.780441   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:13.294689   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:13.308762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:13.308822   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:13.345973   79191 cri.go:89] found id: ""
	I0816 00:38:13.346004   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.346015   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:13.346022   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:13.346083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:13.382905   79191 cri.go:89] found id: ""
	I0816 00:38:13.382934   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.382945   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:13.382952   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:13.383001   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:13.417616   79191 cri.go:89] found id: ""
	I0816 00:38:13.417650   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.417662   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:13.417669   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:13.417739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:13.453314   79191 cri.go:89] found id: ""
	I0816 00:38:13.453350   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.453360   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:13.453368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:13.453435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:13.488507   79191 cri.go:89] found id: ""
	I0816 00:38:13.488536   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.488547   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:13.488555   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:13.488614   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:13.527064   79191 cri.go:89] found id: ""
	I0816 00:38:13.527095   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.527108   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:13.527116   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:13.527178   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:13.562838   79191 cri.go:89] found id: ""
	I0816 00:38:13.562867   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.562876   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:13.562882   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:13.562944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:13.598924   79191 cri.go:89] found id: ""
	I0816 00:38:13.598963   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.598974   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:13.598985   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:13.598999   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:13.651122   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:13.651156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:13.665255   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:13.665281   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:13.742117   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:13.742135   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:13.742148   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:13.824685   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:13.824719   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
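	[editor's note] Each retry cycle above repeats the same probe: minikube looks for a live kube-apiserver process, asks CRI-O whether any control-plane container exists, and then gathers kubelet, dmesg and CRI-O logs. A minimal sketch of the same checks run by hand on the node (commands taken directly from the log; the container names are the ones minikube queries) would be:

	# look for a running apiserver process, as minikube does before each cycle
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# ask CRI-O for each control-plane container; an empty result matches the `found id: ""` lines above
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# the same log sources minikube falls back to when nothing is found
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400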
	I0816 00:38:16.366542   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:16.380855   79191 kubeadm.go:597] duration metric: took 4m3.665876253s to restartPrimaryControlPlane
	W0816 00:38:16.380919   79191 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:38:16.380946   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:38:21.772367   79191 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.39139467s)
	I0816 00:38:21.772449   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:21.788969   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:38:21.800050   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:38:21.811193   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:38:21.811216   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:38:21.811260   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:38:21.821328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:38:21.821391   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:38:21.831777   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:38:21.841357   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:38:21.841424   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:38:21.851564   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.861262   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:38:21.861322   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.871929   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:38:21.881544   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:38:21.881595   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:38:21.891725   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:38:22.120640   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:40:18.143220   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:40:18.143333   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:40:18.144757   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.144804   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.144888   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.145018   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.145134   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:18.145210   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:18.146791   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:18.146879   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:18.146965   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:18.147072   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:18.147164   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:18.147258   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:18.147340   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:18.147434   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:18.147525   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:18.147613   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:18.147708   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:18.147744   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:18.147791   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:18.147839   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:18.147916   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:18.147989   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:18.148045   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:18.148194   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:18.148318   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:18.148365   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:18.148458   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:18.149817   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:18.149941   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:18.150044   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:18.150107   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:18.150187   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:18.150323   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:40:18.150380   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:40:18.150460   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150671   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.150766   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150953   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151033   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151232   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151305   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151520   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151614   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151840   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151856   79191 kubeadm.go:310] 
	I0816 00:40:18.151917   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:40:18.151978   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:40:18.151992   79191 kubeadm.go:310] 
	I0816 00:40:18.152046   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:40:18.152097   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:40:18.152204   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:40:18.152218   79191 kubeadm.go:310] 
	I0816 00:40:18.152314   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:40:18.152349   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:40:18.152377   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:40:18.152384   79191 kubeadm.go:310] 
	I0816 00:40:18.152466   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:40:18.152537   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:40:18.152543   79191 kubeadm.go:310] 
	I0816 00:40:18.152674   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:40:18.152769   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:40:18.152853   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:40:18.152914   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:40:18.152978   79191 kubeadm.go:310] 
	W0816 00:40:18.153019   79191 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
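	[editor's note] The kubeadm failure above already names the follow-up steps. As a hedged sketch only, running them on the node (e.g. via `minikube ssh`) would look like this; CONTAINERID is a placeholder for whatever ID the crictl listing returns:

	# kubelet health and recent logs, as suggested by kubeadm
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers known to CRI-O
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect a failing container's logs (CONTAINERID is a placeholder)
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID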
	
	I0816 00:40:18.153055   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:40:18.634058   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:40:18.648776   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:40:18.659504   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:40:18.659529   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:40:18.659584   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:40:18.670234   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:40:18.670285   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:40:18.680370   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:40:18.689496   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:40:18.689557   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:40:18.698949   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.708056   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:40:18.708118   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.718261   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:40:18.728708   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:40:18.728777   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:40:18.739253   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:40:18.819666   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.819746   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.966568   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.966704   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.966868   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:19.168323   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:19.170213   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:19.170335   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:19.170464   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:19.170546   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:19.170598   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:19.170670   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:19.170740   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:19.170828   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:19.170924   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:19.171031   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:19.171129   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:19.171179   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:19.171261   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:19.421256   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:19.585260   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:19.672935   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:19.928620   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:19.952420   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:19.953527   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:19.953578   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:20.090384   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:20.092904   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:20.093037   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:20.105743   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:20.106980   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:20.108199   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:20.111014   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:41:00.113053   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:41:00.113479   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:00.113752   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:05.113795   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:05.114091   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:15.114695   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:15.114932   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:35.116019   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:35.116207   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.116728   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:42:15.116994   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.117018   79191 kubeadm.go:310] 
	I0816 00:42:15.117071   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:42:15.117136   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:42:15.117147   79191 kubeadm.go:310] 
	I0816 00:42:15.117198   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:42:15.117248   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:42:15.117402   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:42:15.117412   79191 kubeadm.go:310] 
	I0816 00:42:15.117543   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:42:15.117601   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:42:15.117636   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:42:15.117644   79191 kubeadm.go:310] 
	I0816 00:42:15.117778   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:42:15.117918   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:42:15.117929   79191 kubeadm.go:310] 
	I0816 00:42:15.118083   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:42:15.118215   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:42:15.118313   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:42:15.118412   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:42:15.118433   79191 kubeadm.go:310] 
	I0816 00:42:15.118582   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:42:15.118698   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:42:15.118843   79191 kubeadm.go:394] duration metric: took 8m2.460648867s to StartCluster
	I0816 00:42:15.118855   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:42:15.118891   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:42:15.118957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:42:15.162809   79191 cri.go:89] found id: ""
	I0816 00:42:15.162837   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.162848   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:42:15.162855   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:42:15.162925   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:42:15.198020   79191 cri.go:89] found id: ""
	I0816 00:42:15.198042   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.198053   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:42:15.198063   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:42:15.198132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:42:15.238168   79191 cri.go:89] found id: ""
	I0816 00:42:15.238197   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.238206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:42:15.238213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:42:15.238273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:42:15.278364   79191 cri.go:89] found id: ""
	I0816 00:42:15.278391   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.278401   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:42:15.278407   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:42:15.278465   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:42:15.316182   79191 cri.go:89] found id: ""
	I0816 00:42:15.316209   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.316216   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:42:15.316222   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:42:15.316278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:42:15.352934   79191 cri.go:89] found id: ""
	I0816 00:42:15.352962   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.352970   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:42:15.352976   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:42:15.353031   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:42:15.388940   79191 cri.go:89] found id: ""
	I0816 00:42:15.388966   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.388973   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:42:15.388983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:42:15.389042   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:42:15.424006   79191 cri.go:89] found id: ""
	I0816 00:42:15.424035   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.424043   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:42:15.424054   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:42:15.424073   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:42:15.504823   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:42:15.504846   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:42:15.504858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:42:15.608927   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:42:15.608959   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:42:15.676785   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:42:15.676810   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:42:15.744763   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:42:15.744805   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0816 00:42:15.760944   79191 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 00:42:15.761012   79191 out.go:270] * 
	* 
	W0816 00:42:15.761078   79191 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.761098   79191 out.go:270] * 
	* 
	W0816 00:42:15.762220   79191 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:42:15.765697   79191 out.go:201] 
	W0816 00:42:15.766942   79191 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.767018   79191 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 00:42:15.767040   79191 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 00:42:15.768526   79191 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-098619 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
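The wait-control-plane failure above means kubeadm never saw a healthy kubelet on 127.0.0.1:10248, so no control-plane containers were created. A minimal diagnostic sketch, assuming shell access to the affected node (for example via 'minikube ssh -p old-k8s-version-098619'), using only the commands the kubeadm output itself recommends:

	# Is the kubelet service running, and why did it exit?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# Did CRI-O start any control-plane containers at all?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Inspect a failing container's logs, substituting an ID from the listing above
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
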
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 2 (230.240846ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
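Before the post-mortem logs below, note that the failed start ends with minikube's own 'Suggestion:' line: re-run with the kubelet cgroup driver pinned to systemd. A sketch of that retry, assuming the same arguments the test used plus the suggested extra-config flag:

	out/minikube-linux-amd64 start -p old-k8s-version-098619 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
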
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-098619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-098619 logs -n 25: (1.609310006s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-697641 sudo cat                              | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo find                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo crio                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-697641                                       | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-067133 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | disable-driver-mounts-067133                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:25 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-819398             | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC | 16 Aug 24 00:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-758469            | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-616827  | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098619        | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-819398                  | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-758469                 | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-616827       | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098619             | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:29:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 00:29:51.785297   79191 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:29:51.785388   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785392   79191 out.go:358] Setting ErrFile to fd 2...
	I0816 00:29:51.785396   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785578   79191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:29:51.786145   79191 out.go:352] Setting JSON to false
	I0816 00:29:51.787066   79191 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7892,"bootTime":1723760300,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:29:51.787122   79191 start.go:139] virtualization: kvm guest
	I0816 00:29:51.789057   79191 out.go:177] * [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:29:51.790274   79191 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:29:51.790269   79191 notify.go:220] Checking for updates...
	I0816 00:29:51.792828   79191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:29:51.794216   79191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:29:51.795553   79191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:29:51.796761   79191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:29:51.798018   79191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:29:51.799561   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:29:51.799935   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.799990   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.814617   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0816 00:29:51.815056   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.815584   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.815606   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.815933   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.816131   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:51.817809   79191 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 00:29:51.819204   79191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:29:51.819604   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.819652   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.834270   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0816 00:29:51.834584   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.834992   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.835015   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.835303   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.835478   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:49.226097   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:51.870472   79191 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:29:51.872031   79191 start.go:297] selected driver: kvm2
	I0816 00:29:51.872049   79191 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.872137   79191 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:29:51.872785   79191 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.872848   79191 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:29:51.887731   79191 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:29:51.888078   79191 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:29:51.888141   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:29:51.888154   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:29:51.888203   79191 start.go:340] cluster config:
	{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.888300   79191 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.890190   79191 out.go:177] * Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	I0816 00:29:51.891529   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:29:51.891557   79191 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:29:51.891565   79191 cache.go:56] Caching tarball of preloaded images
	I0816 00:29:51.891645   79191 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:29:51.891664   79191 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 00:29:51.891747   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:29:51.891915   79191 start.go:360] acquireMachinesLock for old-k8s-version-098619: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:29:55.306158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:58.378266   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:04.458137   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:07.530158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:13.610160   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:16.682057   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:22.762088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:25.834157   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:31.914106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:34.986091   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:41.066143   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:44.138152   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:50.218140   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:53.290166   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:59.370080   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:02.442130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:08.522126   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:11.594144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:17.674104   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:20.746185   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:26.826131   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:29.898113   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:35.978100   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:39.050136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:45.130120   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:48.202078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:54.282078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:57.354088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:03.434136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:06.506153   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:12.586125   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:15.658144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:21.738130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:24.810191   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:30.890130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:33.962132   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:40.042062   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:43.114154   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:49.194151   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:52.266130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:58.346106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:01.418139   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:04.422042   78713 start.go:364] duration metric: took 4m25.166768519s to acquireMachinesLock for "embed-certs-758469"
	I0816 00:33:04.422099   78713 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:04.422107   78713 fix.go:54] fixHost starting: 
	I0816 00:33:04.422426   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:04.422458   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:04.437335   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I0816 00:33:04.437779   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:04.438284   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:04.438306   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:04.438646   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:04.438873   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:04.439045   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:04.440597   78713 fix.go:112] recreateIfNeeded on embed-certs-758469: state=Stopped err=<nil>
	I0816 00:33:04.440627   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	W0816 00:33:04.440781   78713 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:04.442527   78713 out.go:177] * Restarting existing kvm2 VM for "embed-certs-758469" ...
	I0816 00:33:04.419735   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:04.419772   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420077   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:33:04.420102   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420299   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:33:04.421914   78489 machine.go:96] duration metric: took 4m37.429789672s to provisionDockerMachine
	I0816 00:33:04.421957   78489 fix.go:56] duration metric: took 4m37.451098771s for fixHost
	I0816 00:33:04.421965   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 4m37.451130669s
	W0816 00:33:04.421995   78489 start.go:714] error starting host: provision: host is not running
	W0816 00:33:04.422099   78489 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 00:33:04.422111   78489 start.go:729] Will try again in 5 seconds ...
	I0816 00:33:04.443838   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Start
	I0816 00:33:04.444035   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring networks are active...
	I0816 00:33:04.444849   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network default is active
	I0816 00:33:04.445168   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network mk-embed-certs-758469 is active
	I0816 00:33:04.445491   78713 main.go:141] libmachine: (embed-certs-758469) Getting domain xml...
	I0816 00:33:04.446159   78713 main.go:141] libmachine: (embed-certs-758469) Creating domain...
	I0816 00:33:05.654817   78713 main.go:141] libmachine: (embed-certs-758469) Waiting to get IP...
	I0816 00:33:05.655625   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.656020   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.656064   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.655983   79868 retry.go:31] will retry after 273.341379ms: waiting for machine to come up
	I0816 00:33:05.930542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.931038   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.931061   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.931001   79868 retry.go:31] will retry after 320.172619ms: waiting for machine to come up
	I0816 00:33:06.252718   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.253117   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.253140   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.253091   79868 retry.go:31] will retry after 441.386495ms: waiting for machine to come up
	I0816 00:33:06.695681   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.696108   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.696134   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.696065   79868 retry.go:31] will retry after 491.272986ms: waiting for machine to come up
	I0816 00:33:07.188683   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.189070   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.189092   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.189025   79868 retry.go:31] will retry after 536.865216ms: waiting for machine to come up
	I0816 00:33:07.727831   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.728246   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.728276   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.728193   79868 retry.go:31] will retry after 813.064342ms: waiting for machine to come up
	I0816 00:33:08.543096   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:08.543605   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:08.543637   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:08.543549   79868 retry.go:31] will retry after 1.00495091s: waiting for machine to come up
	I0816 00:33:09.424586   78489 start.go:360] acquireMachinesLock for no-preload-819398: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:33:09.549815   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:09.550226   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:09.550255   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:09.550175   79868 retry.go:31] will retry after 1.483015511s: waiting for machine to come up
	I0816 00:33:11.034879   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:11.035277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:11.035315   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:11.035224   79868 retry.go:31] will retry after 1.513237522s: waiting for machine to come up
	I0816 00:33:12.550817   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:12.551172   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:12.551196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:12.551126   79868 retry.go:31] will retry after 1.483165174s: waiting for machine to come up
	I0816 00:33:14.036748   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:14.037142   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:14.037170   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:14.037087   79868 retry.go:31] will retry after 1.772679163s: waiting for machine to come up
	I0816 00:33:15.811699   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:15.812300   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:15.812334   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:15.812226   79868 retry.go:31] will retry after 3.026936601s: waiting for machine to come up
	I0816 00:33:18.842362   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:18.842759   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:18.842788   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:18.842715   79868 retry.go:31] will retry after 4.400445691s: waiting for machine to come up
	I0816 00:33:23.247813   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248223   78713 main.go:141] libmachine: (embed-certs-758469) Found IP for machine: 192.168.39.185
	I0816 00:33:23.248254   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has current primary IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248265   78713 main.go:141] libmachine: (embed-certs-758469) Reserving static IP address...
	I0816 00:33:23.248613   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.248641   78713 main.go:141] libmachine: (embed-certs-758469) DBG | skip adding static IP to network mk-embed-certs-758469 - found existing host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"}
	I0816 00:33:23.248654   78713 main.go:141] libmachine: (embed-certs-758469) Reserved static IP address: 192.168.39.185
	I0816 00:33:23.248673   78713 main.go:141] libmachine: (embed-certs-758469) Waiting for SSH to be available...
	I0816 00:33:23.248687   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Getting to WaitForSSH function...
	I0816 00:33:23.250607   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.250931   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.250965   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.251113   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH client type: external
	I0816 00:33:23.251141   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa (-rw-------)
	I0816 00:33:23.251179   78713 main.go:141] libmachine: (embed-certs-758469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:23.251196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | About to run SSH command:
	I0816 00:33:23.251211   78713 main.go:141] libmachine: (embed-certs-758469) DBG | exit 0
	I0816 00:33:23.373899   78713 main.go:141] libmachine: (embed-certs-758469) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:23.374270   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetConfigRaw
	I0816 00:33:23.374914   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.377034   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377343   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.377370   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377561   78713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/config.json ...
	I0816 00:33:23.377760   78713 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:23.377776   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:23.378014   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.379950   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380248   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.380277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380369   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.380524   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380668   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380795   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.380950   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.381134   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.381145   78713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:23.486074   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:23.486106   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486462   78713 buildroot.go:166] provisioning hostname "embed-certs-758469"
	I0816 00:33:23.486491   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486677   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.489520   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.489905   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.489924   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.490108   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.490279   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490427   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490566   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.490730   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.490901   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.490920   78713 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-758469 && echo "embed-certs-758469" | sudo tee /etc/hostname
	I0816 00:33:23.614635   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-758469
	
	I0816 00:33:23.614671   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.617308   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617673   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.617701   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617881   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.618087   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618351   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.618536   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.618721   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.618746   78713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-758469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-758469/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-758469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:23.734901   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:23.734931   78713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:23.734946   78713 buildroot.go:174] setting up certificates
	I0816 00:33:23.734953   78713 provision.go:84] configureAuth start
	I0816 00:33:23.734961   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.735255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.737952   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738312   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.738341   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738445   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.740589   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.740926   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.740953   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.741060   78713 provision.go:143] copyHostCerts
	I0816 00:33:23.741121   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:23.741138   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:23.741203   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:23.741357   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:23.741367   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:23.741393   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:23.741452   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:23.741458   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:23.741478   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:23.741525   78713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.embed-certs-758469 san=[127.0.0.1 192.168.39.185 embed-certs-758469 localhost minikube]
	I0816 00:33:23.871115   78713 provision.go:177] copyRemoteCerts
	I0816 00:33:23.871167   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:23.871190   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.874049   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874505   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.874538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874720   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.874913   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.875079   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.875210   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:23.959910   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:23.984454   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:33:24.009067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:24.036195   78713 provision.go:87] duration metric: took 301.229994ms to configureAuth
	I0816 00:33:24.036218   78713 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:24.036389   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:24.036453   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.039196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.039562   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039771   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.039970   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040125   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040224   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.040372   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.040584   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.040612   78713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:24.550693   78747 start.go:364] duration metric: took 4m44.527028624s to acquireMachinesLock for "default-k8s-diff-port-616827"
	I0816 00:33:24.550757   78747 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:24.550763   78747 fix.go:54] fixHost starting: 
	I0816 00:33:24.551164   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:24.551203   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:24.567741   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0816 00:33:24.568138   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:24.568674   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:33:24.568703   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:24.569017   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:24.569212   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:24.569385   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:33:24.570856   78747 fix.go:112] recreateIfNeeded on default-k8s-diff-port-616827: state=Stopped err=<nil>
	I0816 00:33:24.570901   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	W0816 00:33:24.571074   78747 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:24.572673   78747 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-616827" ...
	I0816 00:33:24.574220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Start
	I0816 00:33:24.574403   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring networks are active...
	I0816 00:33:24.575086   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network default is active
	I0816 00:33:24.575528   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network mk-default-k8s-diff-port-616827 is active
	I0816 00:33:24.576033   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Getting domain xml...
	I0816 00:33:24.576734   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Creating domain...
	I0816 00:33:24.314921   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:24.314951   78713 machine.go:96] duration metric: took 937.178488ms to provisionDockerMachine
	I0816 00:33:24.314964   78713 start.go:293] postStartSetup for "embed-certs-758469" (driver="kvm2")
	I0816 00:33:24.314974   78713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:24.315007   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.315405   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:24.315430   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.317962   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318242   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.318270   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318390   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.318588   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.318763   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.318900   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.400628   78713 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:24.405061   78713 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:24.405082   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:24.405148   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:24.405215   78713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:24.405302   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:24.414985   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:24.439646   78713 start.go:296] duration metric: took 124.668147ms for postStartSetup
	I0816 00:33:24.439692   78713 fix.go:56] duration metric: took 20.017583324s for fixHost
	I0816 00:33:24.439719   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.442551   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.442920   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.442954   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.443051   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.443257   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443434   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443567   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.443740   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.443912   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.443921   78713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:24.550562   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768404.525876526
	
	I0816 00:33:24.550588   78713 fix.go:216] guest clock: 1723768404.525876526
	I0816 00:33:24.550599   78713 fix.go:229] Guest: 2024-08-16 00:33:24.525876526 +0000 UTC Remote: 2024-08-16 00:33:24.439696953 +0000 UTC m=+285.318245053 (delta=86.179573ms)
	I0816 00:33:24.550618   78713 fix.go:200] guest clock delta is within tolerance: 86.179573ms
	I0816 00:33:24.550623   78713 start.go:83] releasing machines lock for "embed-certs-758469", held for 20.128541713s
	I0816 00:33:24.550647   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.551090   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:24.554013   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554358   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.554382   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554572   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555062   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555222   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555279   78713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:24.555330   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.555441   78713 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:24.555463   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.558216   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558368   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558567   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558719   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558723   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558742   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558925   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559074   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559205   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559285   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.559329   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.656926   78713 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:24.662590   78713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:24.811290   78713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:24.817486   78713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:24.817570   78713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:24.838317   78713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
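The two steps above look for bridge/podman CNI configs under /etc/cni/net.d and rename them with a .mk_disabled suffix so they stop loading. A rough local equivalent of that find/mv pass, assuming the same directory and suffix (the real command runs remotely under sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("read dir:", err)
    		return
    	}
    	for _, e := range entries {
    		name := e.Name()
    		// Mirror the find patterns: *bridge* or *podman*, skipping
    		// files that were already disabled on a previous run.
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
    			continue
    		}
    		src := filepath.Join(dir, name)
    		if err := os.Rename(src, src+".mk_disabled"); err != nil {
    			fmt.Println("disable:", err)
    			continue
    		}
    		fmt.Printf("disabled %s\n", src)
    	}
    }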
	I0816 00:33:24.838342   78713 start.go:495] detecting cgroup driver to use...
	I0816 00:33:24.838396   78713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:24.856294   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:24.875603   78713 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:24.875650   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:24.890144   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:24.904327   78713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:25.018130   78713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:25.149712   78713 docker.go:233] disabling docker service ...
	I0816 00:33:25.149795   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:25.165494   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:25.179554   78713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:25.330982   78713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:25.476436   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:25.493242   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:25.515688   78713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:25.515762   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.529924   78713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:25.529997   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.541412   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.551836   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.563356   78713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:25.574486   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.585533   78713 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.604169   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.615335   78713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:25.629366   78713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:25.629427   78713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:25.645937   78713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:25.657132   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:25.771891   78713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:25.914817   78713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:25.914904   78713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:25.919572   78713 start.go:563] Will wait 60s for crictl version
	I0816 00:33:25.919620   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:33:25.923419   78713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:25.969387   78713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:25.969484   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.002529   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.035709   78713 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:26.036921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:26.039638   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040001   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:26.040023   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040254   78713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:26.044444   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
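The bash one-liner above keeps /etc/hosts idempotent: any existing host.minikube.internal line is filtered out, the fresh mapping is appended, and the result is copied back over the original. A simplified local sketch of the same rewrite (the log performs it on the guest via grep, echo and sudo cp; here a temp file in the same directory is used instead):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // upsertHostsEntry drops any line ending in "\t<hostname>" and appends a
    // fresh "ip\thostname" mapping, writing through a temp file.
    func upsertHostsEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	tmp := filepath.Join(filepath.Dir(path), ".hosts.tmp")
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }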
	I0816 00:33:26.057172   78713 kubeadm.go:883] updating cluster {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:26.057326   78713 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:26.057382   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:26.093950   78713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:26.094031   78713 ssh_runner.go:195] Run: which lz4
	I0816 00:33:26.097998   78713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:26.102152   78713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:26.102183   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:27.538323   78713 crio.go:462] duration metric: took 1.440354469s to copy over tarball
	I0816 00:33:27.538400   78713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:25.885210   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting to get IP...
	I0816 00:33:25.886135   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886555   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:25.886538   80004 retry.go:31] will retry after 214.751664ms: waiting for machine to come up
	I0816 00:33:26.103182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103652   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103677   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.103603   80004 retry.go:31] will retry after 239.667632ms: waiting for machine to come up
	I0816 00:33:26.345223   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345776   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.345701   80004 retry.go:31] will retry after 474.740445ms: waiting for machine to come up
	I0816 00:33:26.822224   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822682   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822716   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.822639   80004 retry.go:31] will retry after 574.324493ms: waiting for machine to come up
	I0816 00:33:27.398433   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398939   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398971   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.398904   80004 retry.go:31] will retry after 567.388033ms: waiting for machine to come up
	I0816 00:33:27.967686   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968225   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.968093   80004 retry.go:31] will retry after 940.450394ms: waiting for machine to come up
	I0816 00:33:28.910549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911058   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911088   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:28.911031   80004 retry.go:31] will retry after 919.494645ms: waiting for machine to come up
	I0816 00:33:29.832687   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833204   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833244   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:29.833189   80004 retry.go:31] will retry after 1.332024716s: waiting for machine to come up
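The interleaved default-k8s-diff-port-616827 lines show libmachine polling for the VM's DHCP lease with a growing, jittered backoff (roughly 200ms stretching toward a second or more between attempts). A sketch of that retry shape, with a stand-in lookup function since the real code queries the libvirt network for the lease:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookupIP until it succeeds or the deadline passes,
    // sleeping a jittered, growing interval between attempts.
    func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
    	start := time.Now()
    	for attempt := 1; time.Since(start) < deadline; attempt++ {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		wait := time.Duration(attempt)*200*time.Millisecond +
    			time.Duration(rand.Intn(250))*time.Millisecond
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    	}
    	return "", errors.New("timed out waiting for machine to get an IP")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.50.10", nil
    	}, 2*time.Minute)
    	fmt.Println(ip, err)
    }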
	I0816 00:33:29.677224   78713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.138774475s)
	I0816 00:33:29.677252   78713 crio.go:469] duration metric: took 2.138901242s to extract the tarball
	I0816 00:33:29.677261   78713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:29.716438   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:29.768597   78713 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:29.768622   78713 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:33:29.768634   78713 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.0 crio true true} ...
	I0816 00:33:29.768787   78713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-758469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:29.768874   78713 ssh_runner.go:195] Run: crio config
	I0816 00:33:29.813584   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:29.813607   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:29.813620   78713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:29.813644   78713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-758469 NodeName:embed-certs-758469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:29.813776   78713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-758469"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:29.813862   78713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:29.825680   78713 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:29.825744   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:29.836314   78713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 00:33:29.853030   78713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:29.869368   78713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 00:33:29.886814   78713 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:29.890644   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:29.903138   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:30.040503   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:30.058323   78713 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469 for IP: 192.168.39.185
	I0816 00:33:30.058351   78713 certs.go:194] generating shared ca certs ...
	I0816 00:33:30.058372   78713 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:30.058559   78713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:30.058624   78713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:30.058638   78713 certs.go:256] generating profile certs ...
	I0816 00:33:30.058778   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/client.key
	I0816 00:33:30.058873   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key.0d0e36ad
	I0816 00:33:30.058930   78713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key
	I0816 00:33:30.059101   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:30.059146   78713 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:30.059162   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:30.059197   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:30.059251   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:30.059285   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:30.059345   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:30.060202   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:30.098381   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:30.135142   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:30.175518   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:30.214349   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 00:33:30.249278   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:30.273772   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:30.298067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:30.324935   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:30.351149   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:30.375636   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:30.399250   78713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:30.417646   78713 ssh_runner.go:195] Run: openssl version
	I0816 00:33:30.423691   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:30.435254   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439651   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439700   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.445673   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:30.456779   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:30.467848   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472199   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472274   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.478109   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:30.489481   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:30.500747   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505116   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505162   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.510739   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:33:30.521829   78713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:30.526444   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:30.532373   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:30.538402   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:30.544697   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:30.550762   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:30.556573   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
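Each "openssl x509 -checkend 86400" run above asks whether the given certificate will expire within the next 24 hours (86400 seconds); a zero exit means the cert is still good for at least that long. The equivalent check in Go, with a placeholder path taken from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at certPath expires
    // before now+window, mirroring openssl's -checkend semantics.
    func expiresWithin(certPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }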
	I0816 00:33:30.562513   78713 kubeadm.go:392] StartCluster: {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:30.562602   78713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:30.562650   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.607119   78713 cri.go:89] found id: ""
	I0816 00:33:30.607197   78713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:30.617798   78713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:30.617818   78713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:30.617873   78713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:30.627988   78713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:30.628976   78713 kubeconfig.go:125] found "embed-certs-758469" server: "https://192.168.39.185:8443"
	I0816 00:33:30.631601   78713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:30.642001   78713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.185
	I0816 00:33:30.642036   78713 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:30.642047   78713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:30.642088   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.685946   78713 cri.go:89] found id: ""
	I0816 00:33:30.686049   78713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:30.704130   78713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:30.714467   78713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:30.714490   78713 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:30.714534   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:33:30.723924   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:30.723985   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:30.733804   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:33:30.743345   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:30.743412   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:30.753604   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.763271   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:30.763340   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.773121   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:33:30.782507   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:30.782565   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:30.792652   78713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:30.802523   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:30.923193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.206424   78713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.283195087s)
	I0816 00:33:32.206449   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.435275   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.509193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
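On this restart path minikube does not run a full "kubeadm init"; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. A sketch of driving that same sequence from Go, mirroring the paths in the log with simplified error handling; the real commands run on the guest over SSH under sudo:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		// Prepend the minikube binaries directory, as the log's
    		// "env PATH=..." wrapper does (last PATH entry wins in os/exec).
    		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.31.0:"+os.Getenv("PATH"))
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Printf("kubeadm %v failed: %v\n", phase, err)
    			return
    		}
    	}
    }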
	I0816 00:33:32.590924   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:32.591020   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.091804   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.591198   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.607568   78713 api_server.go:72] duration metric: took 1.016656713s to wait for apiserver process to appear ...
	I0816 00:33:33.607596   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:33.607619   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
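From here the wait switches from "is the apiserver process up" to "is the API healthy": /healthz is probed roughly every half second, and the 403s (anonymous access rejected while RBAC bootstraps) and 500s (post-start hooks still failing) that follow are treated as "not ready yet". A minimal sketch of that probe loop, assuming a 500ms interval and skipping TLS verification since the probe runs anonymously before client credentials are in place:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver healthz did not become ready within %v", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.185:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }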
	I0816 00:33:31.166506   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166900   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166927   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:31.166860   80004 retry.go:31] will retry after 1.213971674s: waiting for machine to come up
	I0816 00:33:32.382376   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382862   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382889   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:32.382821   80004 retry.go:31] will retry after 2.115615681s: waiting for machine to come up
	I0816 00:33:34.501236   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501697   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:34.501646   80004 retry.go:31] will retry after 2.495252025s: waiting for machine to come up
	I0816 00:33:36.334341   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.334374   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.334389   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.351971   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.352011   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.608364   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.614582   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:36.614619   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.107654   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.113352   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.113384   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.607902   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.614677   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.614710   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.108329   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.112493   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.112521   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.608061   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.613134   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.613172   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.107667   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.111920   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:39.111954   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.608190   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.613818   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:33:39.619467   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:39.619490   78713 api_server.go:131] duration metric: took 6.011887872s to wait for apiserver health ...
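The 500s above are the apiserver reporting one unfinished poststarthook (apiservice-discovery-controller); minikube simply re-polls /healthz until it flips to 200. A hand-run equivalent of that loop, with the endpoint taken from the log and -k used because the apiserver certificate is signed by the cluster CA rather than a system CA:

    # keep polling until /healthz returns the plain "ok" body instead of the 500 report
    until curl -sk --max-time 2 https://192.168.39.185:8443/healthz | grep -qx 'ok'; do
        sleep 0.5   # roughly the retry cadence visible in the log timestamps
    done
    echo "apiserver healthy"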
	I0816 00:33:39.619499   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:39.619504   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:39.621572   78713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:36.999158   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999616   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999645   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:36.999576   80004 retry.go:31] will retry after 2.736710806s: waiting for machine to come up
	I0816 00:33:39.737818   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738286   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738320   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:39.738215   80004 retry.go:31] will retry after 3.3205645s: waiting for machine to come up
	I0816 00:33:39.623254   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:39.633910   78713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
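The conflist is copied from memory, so its 496 bytes never appear in the log; minikube's bridge configuration is typically a bridge plugin with host-local IPAM plus a portmap chain. If the exact file matters, it can be read back out of the guest afterwards (hedged spot check, profile name taken from the log):

    out/minikube-linux-amd64 -p embed-certs-758469 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist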
	I0816 00:33:39.653736   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:39.663942   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:39.663983   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:39.663994   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:39.664044   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:39.664060   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:39.664067   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:33:39.664078   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:39.664089   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:39.664107   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:33:39.664118   78713 system_pods.go:74] duration metric: took 10.358906ms to wait for pod list to return data ...
	I0816 00:33:39.664127   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:39.667639   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:39.667669   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:39.667682   78713 node_conditions.go:105] duration metric: took 3.547018ms to run NodePressure ...
	I0816 00:33:39.667701   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:39.929620   78713 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934264   78713 kubeadm.go:739] kubelet initialised
	I0816 00:33:39.934289   78713 kubeadm.go:740] duration metric: took 4.64037ms waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934299   78713 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:39.938771   78713 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.943735   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943760   78713 pod_ready.go:82] duration metric: took 4.962601ms for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.943772   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943781   78713 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.947900   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947925   78713 pod_ready.go:82] duration metric: took 4.129605ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.947936   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947943   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.953367   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953400   78713 pod_ready.go:82] duration metric: took 5.445682ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.953412   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953422   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.057510   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057533   78713 pod_ready.go:82] duration metric: took 104.099944ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.057543   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057548   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.458355   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458389   78713 pod_ready.go:82] duration metric: took 400.832009ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.458400   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458408   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.857939   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857964   78713 pod_ready.go:82] duration metric: took 399.549123ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.857974   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857980   78713 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:41.257101   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257126   78713 pod_ready.go:82] duration metric: took 399.13078ms for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:41.257135   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257142   78713 pod_ready.go:39] duration metric: took 1.322827054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
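Every wait above was skipped for the same reason: the node itself still reports Ready=False right after the restart. The same picture can be pulled by hand against the profile's context (names taken from the log, using the test run's kubeconfig):

    kubectl --context embed-certs-758469 get nodes
    kubectl --context embed-certs-758469 -n kube-system get pods -o wide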
	I0816 00:33:41.257159   78713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:33:41.269076   78713 ops.go:34] apiserver oom_adj: -16
	I0816 00:33:41.269098   78713 kubeadm.go:597] duration metric: took 10.651273415s to restartPrimaryControlPlane
	I0816 00:33:41.269107   78713 kubeadm.go:394] duration metric: took 10.706599955s to StartCluster
	I0816 00:33:41.269127   78713 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.269191   78713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:33:41.271380   78713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.271679   78713 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:33:41.271714   78713 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:33:41.271812   78713 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-758469"
	I0816 00:33:41.271834   78713 addons.go:69] Setting default-storageclass=true in profile "embed-certs-758469"
	I0816 00:33:41.271845   78713 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-758469"
	W0816 00:33:41.271858   78713 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:33:41.271874   78713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-758469"
	I0816 00:33:41.271882   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:41.271891   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.271860   78713 addons.go:69] Setting metrics-server=true in profile "embed-certs-758469"
	I0816 00:33:41.271934   78713 addons.go:234] Setting addon metrics-server=true in "embed-certs-758469"
	W0816 00:33:41.271952   78713 addons.go:243] addon metrics-server should already be in state true
	I0816 00:33:41.272022   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.272324   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272575   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272604   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272704   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272718   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272745   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.274599   78713 out.go:177] * Verifying Kubernetes components...
	I0816 00:33:41.276283   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:41.292526   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0816 00:33:41.292560   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0816 00:33:41.292556   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43083
	I0816 00:33:41.293000   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293053   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293004   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293482   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293499   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293592   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293606   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293625   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293607   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293891   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293939   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293976   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.294132   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.294475   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294483   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294517   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.294522   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.297714   78713 addons.go:234] Setting addon default-storageclass=true in "embed-certs-758469"
	W0816 00:33:41.297747   78713 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:33:41.297787   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.298192   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.298238   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.310002   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0816 00:33:41.310000   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0816 00:33:41.310469   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310521   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310899   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.310917   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311027   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.311048   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311293   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311476   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.311491   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311642   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.313614   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.313697   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.315474   78713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:33:41.315484   78713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:33:41.316719   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0816 00:33:41.316887   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:33:41.316902   78713 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:33:41.316921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.316975   78713 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.316985   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:33:41.316995   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.317061   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.317572   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.317594   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.317941   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.318669   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.318702   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.320288   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320668   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.320695   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320726   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320939   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321241   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.321267   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.321402   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321497   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.321547   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321592   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.321883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.322021   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.334230   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0816 00:33:41.334580   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.335088   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.335107   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.335387   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.335549   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.336891   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.337084   78713 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.337100   78713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:33:41.337115   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.340204   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340667   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.340697   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340837   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.340987   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.341120   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.341277   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.476131   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:41.502242   78713 node_ready.go:35] waiting up to 6m0s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:41.559562   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.575913   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:33:41.575937   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:33:41.614763   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:33:41.614784   78713 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:33:41.628658   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.670367   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:41.670393   78713 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:33:41.746638   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:42.849125   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.22043382s)
	I0816 00:33:42.849189   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849202   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849397   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.289807606s)
	I0816 00:33:42.849438   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849448   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849478   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849514   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849524   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849538   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849550   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849761   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849803   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849813   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849825   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849833   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.850018   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850033   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.850059   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850059   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.850078   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856398   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.856419   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.856647   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.856667   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856676   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901261   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1545817s)
	I0816 00:33:42.901314   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901329   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901619   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901680   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901694   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901704   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901713   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901953   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901973   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901986   78713 addons.go:475] Verifying addon metrics-server=true in "embed-certs-758469"
	I0816 00:33:42.904677   78713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 00:33:42.905802   78713 addons.go:510] duration metric: took 1.634089536s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
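Note that the enable step finishes while the metrics-server pod is still Pending (see the pod list above), so "Enabled addons" does not mean the metrics API is serving yet. Hedged follow-up checks, not part of this log, to see whether the addon actually registers:

    kubectl --context embed-certs-758469 -n kube-system get deployment metrics-server
    kubectl --context embed-certs-758469 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-758469 top nodes   # only succeeds once metrics are actually being scraped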
	I0816 00:33:43.506584   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:44.254575   79191 start.go:364] duration metric: took 3m52.362627542s to acquireMachinesLock for "old-k8s-version-098619"
	I0816 00:33:44.254648   79191 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:44.254659   79191 fix.go:54] fixHost starting: 
	I0816 00:33:44.255099   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:44.255137   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:44.271236   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0816 00:33:44.271591   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:44.272030   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:33:44.272052   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:44.272328   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:44.272503   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:33:44.272660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetState
	I0816 00:33:44.274235   79191 fix.go:112] recreateIfNeeded on old-k8s-version-098619: state=Stopped err=<nil>
	I0816 00:33:44.274272   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	W0816 00:33:44.274415   79191 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:44.275978   79191 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-098619" ...
	I0816 00:33:43.059949   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060413   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Found IP for machine: 192.168.50.128
	I0816 00:33:43.060440   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserving static IP address...
	I0816 00:33:43.060479   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has current primary IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060881   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.060906   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | skip adding static IP to network mk-default-k8s-diff-port-616827 - found existing host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"}
	I0816 00:33:43.060921   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserved static IP address: 192.168.50.128
	I0816 00:33:43.060937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for SSH to be available...
	I0816 00:33:43.060952   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Getting to WaitForSSH function...
	I0816 00:33:43.063249   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063552   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.063592   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH client type: external
	I0816 00:33:43.063833   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa (-rw-------)
	I0816 00:33:43.063877   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:43.063896   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | About to run SSH command:
	I0816 00:33:43.063905   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | exit 0
	I0816 00:33:43.185986   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | SSH cmd err, output: <nil>: 
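Both the DHCP lease matching above and the SSH probe can be reproduced on the CI host; the libvirt network name, key path and IP are taken from the log:

    # may need sudo or -c qemu:///system depending on how libvirt is set up
    virsh net-dhcp-leases mk-default-k8s-diff-port-616827
    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa \
        docker@192.168.50.128 'exit 0'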
	I0816 00:33:43.186338   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetConfigRaw
	I0816 00:33:43.186944   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.189324   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189617   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.189643   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189890   78747 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/config.json ...
	I0816 00:33:43.190166   78747 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:43.190192   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:43.190401   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.192515   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192836   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.192865   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192940   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.193118   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193280   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193454   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.193614   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.193812   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.193825   78747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:43.290143   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:43.290168   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290395   78747 buildroot.go:166] provisioning hostname "default-k8s-diff-port-616827"
	I0816 00:33:43.290422   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290603   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.293231   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.293665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293829   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.294038   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294195   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294325   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.294479   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.294685   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.294703   78747 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-616827 && echo "default-k8s-diff-port-616827" | sudo tee /etc/hostname
	I0816 00:33:43.406631   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-616827
	
	I0816 00:33:43.406655   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.409271   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409610   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.409641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409794   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.409984   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410160   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.410491   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.410670   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.410695   78747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-616827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-616827/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-616827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:43.515766   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
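A hedged way to confirm the hostname and /etc/hosts edits that the SSH commands above applied (profile name taken from the log):

    out/minikube-linux-amd64 -p default-k8s-diff-port-616827 ssh -- 'hostname && grep default-k8s-diff-port-616827 /etc/hosts'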
	I0816 00:33:43.515796   78747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:43.515829   78747 buildroot.go:174] setting up certificates
	I0816 00:33:43.515841   78747 provision.go:84] configureAuth start
	I0816 00:33:43.515850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.516128   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.518730   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519055   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.519087   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.521186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.521538   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521691   78747 provision.go:143] copyHostCerts
	I0816 00:33:43.521746   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:43.521764   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:43.521822   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:43.521949   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:43.521959   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:43.521982   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:43.522050   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:43.522057   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:43.522074   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:43.522132   78747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-616827 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-616827 localhost minikube]
	I0816 00:33:43.601126   78747 provision.go:177] copyRemoteCerts
	I0816 00:33:43.601179   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:43.601203   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.603816   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604148   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.604180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.604549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.604725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.604863   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:43.686829   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:43.712297   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 00:33:43.738057   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:43.762820   78747 provision.go:87] duration metric: took 246.967064ms to configureAuth
	I0816 00:33:43.762853   78747 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:43.763069   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:43.763155   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.765886   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766256   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.766287   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766447   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.766641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766813   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.767164   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.767318   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.767334   78747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:44.025337   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:44.025373   78747 machine.go:96] duration metric: took 835.190539ms to provisionDockerMachine
	I0816 00:33:44.025387   78747 start.go:293] postStartSetup for "default-k8s-diff-port-616827" (driver="kvm2")
	I0816 00:33:44.025401   78747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:44.025416   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.025780   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:44.025804   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.028307   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028591   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.028618   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028740   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.028925   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.029117   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.029281   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.109481   78747 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:44.115290   78747 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:44.115317   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:44.115388   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:44.115482   78747 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:44.115597   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:44.128677   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:44.154643   78747 start.go:296] duration metric: took 129.242138ms for postStartSetup
	I0816 00:33:44.154685   78747 fix.go:56] duration metric: took 19.603921801s for fixHost
	I0816 00:33:44.154705   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.157477   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.157907   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.157937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.158051   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.158264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158411   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158580   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.158757   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:44.158981   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:44.158996   78747 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:44.254419   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768424.226223949
	
	I0816 00:33:44.254443   78747 fix.go:216] guest clock: 1723768424.226223949
	I0816 00:33:44.254452   78747 fix.go:229] Guest: 2024-08-16 00:33:44.226223949 +0000 UTC Remote: 2024-08-16 00:33:44.154688835 +0000 UTC m=+304.265683075 (delta=71.535114ms)
	I0816 00:33:44.254476   78747 fix.go:200] guest clock delta is within tolerance: 71.535114ms
	I0816 00:33:44.254482   78747 start.go:83] releasing machines lock for "default-k8s-diff-port-616827", held for 19.703745588s
	I0816 00:33:44.254504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.254750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:44.257516   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.257879   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.257910   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.258111   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258828   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258908   78747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:44.258946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.259033   78747 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:44.259048   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.261566   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261814   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261978   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262008   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262112   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262145   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262254   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262390   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262442   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262502   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.262549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262642   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.346934   78747 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:44.370413   78747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:44.519130   78747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:44.525276   78747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:44.525344   78747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:44.549125   78747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:44.549154   78747 start.go:495] detecting cgroup driver to use...
	I0816 00:33:44.549227   78747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:44.575221   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:44.592214   78747 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:44.592270   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:44.607403   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:44.629127   78747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:44.786185   78747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:44.954426   78747 docker.go:233] disabling docker service ...
	I0816 00:33:44.954495   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:44.975169   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:44.994113   78747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:45.142572   78747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:45.297255   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:45.313401   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:45.334780   78747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:45.334851   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.346039   78747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:45.346111   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.357681   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.368607   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.381164   78747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:45.394060   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.406010   78747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.424720   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.437372   78747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:45.450515   78747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:45.450595   78747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:45.465740   78747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:45.476568   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:45.629000   78747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:45.781044   78747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:45.781142   78747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:45.787480   78747 start.go:563] Will wait 60s for crictl version
	I0816 00:33:45.787551   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:33:45.791907   78747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:45.836939   78747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:45.837025   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.869365   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.907162   78747 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:44.277288   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .Start
	I0816 00:33:44.277426   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring networks are active...
	I0816 00:33:44.278141   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network default is active
	I0816 00:33:44.278471   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network mk-old-k8s-version-098619 is active
	I0816 00:33:44.278820   79191 main.go:141] libmachine: (old-k8s-version-098619) Getting domain xml...
	I0816 00:33:44.279523   79191 main.go:141] libmachine: (old-k8s-version-098619) Creating domain...
	I0816 00:33:45.643704   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting to get IP...
	I0816 00:33:45.644691   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.645213   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.645247   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.645162   80212 retry.go:31] will retry after 198.057532ms: waiting for machine to come up
	I0816 00:33:45.844756   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.845297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.845321   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.845247   80212 retry.go:31] will retry after 288.630433ms: waiting for machine to come up
	I0816 00:33:46.135913   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.136413   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.136442   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.136365   80212 retry.go:31] will retry after 456.48021ms: waiting for machine to come up
	I0816 00:33:46.594170   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.594649   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.594678   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.594592   80212 retry.go:31] will retry after 501.49137ms: waiting for machine to come up
	I0816 00:33:46.006040   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:47.007144   78713 node_ready.go:49] node "embed-certs-758469" has status "Ready":"True"
	I0816 00:33:47.007172   78713 node_ready.go:38] duration metric: took 5.504897396s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:47.007183   78713 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:47.014800   78713 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:49.022567   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:45.908518   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:45.912248   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.912762   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:45.912797   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.913115   78747 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:45.917917   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:45.935113   78747 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:45.935294   78747 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:45.935351   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:45.988031   78747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:45.988115   78747 ssh_runner.go:195] Run: which lz4
	I0816 00:33:45.992508   78747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:45.997108   78747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:45.997199   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:47.459404   78747 crio.go:462] duration metric: took 1.466928999s to copy over tarball
	I0816 00:33:47.459478   78747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:49.621449   78747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16194292s)
	I0816 00:33:49.621484   78747 crio.go:469] duration metric: took 2.162054092s to extract the tarball
	I0816 00:33:49.621494   78747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:49.660378   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:49.709446   78747 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:49.709471   78747 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:33:49.709481   78747 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.0 crio true true} ...
	I0816 00:33:49.709609   78747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-616827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:49.709704   78747 ssh_runner.go:195] Run: crio config
	I0816 00:33:49.756470   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:49.756497   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:49.756510   78747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:49.756534   78747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-616827 NodeName:default-k8s-diff-port-616827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:49.756745   78747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-616827"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:49.756827   78747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:49.766769   78747 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:49.766840   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:49.776367   78747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 00:33:49.793191   78747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:49.811993   78747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 00:33:49.829787   78747 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:49.833673   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:49.846246   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:47.098130   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.098614   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.098645   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.098569   80212 retry.go:31] will retry after 663.568587ms: waiting for machine to come up
	I0816 00:33:47.763930   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.764447   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.764470   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.764376   80212 retry.go:31] will retry after 679.581678ms: waiting for machine to come up
	I0816 00:33:48.446082   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:48.446552   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:48.446579   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:48.446498   80212 retry.go:31] will retry after 1.090430732s: waiting for machine to come up
	I0816 00:33:49.538961   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:49.539454   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:49.539482   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:49.539397   80212 retry.go:31] will retry after 1.039148258s: waiting for machine to come up
	I0816 00:33:50.579642   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:50.580119   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:50.580144   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:50.580074   80212 retry.go:31] will retry after 1.440992413s: waiting for machine to come up
	I0816 00:33:51.788858   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:54.022577   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:49.963020   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:49.980142   78747 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827 for IP: 192.168.50.128
	I0816 00:33:49.980170   78747 certs.go:194] generating shared ca certs ...
	I0816 00:33:49.980192   78747 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:49.980408   78747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:49.980470   78747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:49.980489   78747 certs.go:256] generating profile certs ...
	I0816 00:33:49.980583   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/client.key
	I0816 00:33:49.980669   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key.2062a467
	I0816 00:33:49.980737   78747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key
	I0816 00:33:49.980891   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:49.980940   78747 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:49.980949   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:49.980984   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:49.981021   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:49.981050   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:49.981102   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:49.981835   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:50.014530   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:50.057377   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:50.085730   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:50.121721   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 00:33:50.166448   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:50.195059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:50.220059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:50.244288   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:50.268463   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:50.293203   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:50.318859   78747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:50.336625   78747 ssh_runner.go:195] Run: openssl version
	I0816 00:33:50.343301   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:50.355408   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360245   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360312   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.366435   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:50.377753   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:50.389482   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394337   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394419   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.400279   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:33:50.412410   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:50.424279   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429013   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429077   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.435095   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:50.448148   78747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:50.453251   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:50.459730   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:50.466145   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:50.472438   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:50.478701   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:50.485081   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
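Each "-checkend 86400" call above asks OpenSSL whether the certificate expires within the next 24 hours; the command exits 0 if it does not. A minimal sketch of the same check with an explicit branch (the path comes from the log, the renewal message is illustrative):

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h; minikube would regenerate it"
    fi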
	I0816 00:33:50.490958   78747 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:50.491091   78747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:50.491173   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.545458   78747 cri.go:89] found id: ""
	I0816 00:33:50.545532   78747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:50.557054   78747 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:50.557074   78747 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:50.557122   78747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:50.570313   78747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:50.571774   78747 kubeconfig.go:125] found "default-k8s-diff-port-616827" server: "https://192.168.50.128:8444"
	I0816 00:33:50.574969   78747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:50.586066   78747 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I0816 00:33:50.586101   78747 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:50.586114   78747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:50.586172   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.631347   78747 cri.go:89] found id: ""
	I0816 00:33:50.631416   78747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:50.651296   78747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:50.665358   78747 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:50.665387   78747 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:50.665427   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 00:33:50.678634   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:50.678706   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:50.690376   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 00:33:50.702070   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:50.702132   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:50.714117   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.725349   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:50.725413   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.735691   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 00:33:50.745524   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:50.745598   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:50.756310   78747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:50.771825   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:50.908593   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.046812   78747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138178717s)
	I0816 00:33:52.046863   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.282111   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.357877   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.485435   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:52.485531   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:52.985717   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.486461   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.522663   78747 api_server.go:72] duration metric: took 1.037234176s to wait for apiserver process to appear ...
	I0816 00:33:53.522692   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:53.522713   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:52.022573   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:52.023319   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:52.023352   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:52.023226   80212 retry.go:31] will retry after 1.814668747s: waiting for machine to come up
	I0816 00:33:53.839539   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:53.839916   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:53.839944   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:53.839861   80212 retry.go:31] will retry after 1.900379439s: waiting for machine to come up
	I0816 00:33:55.742480   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:55.742981   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:55.743004   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:55.742920   80212 retry.go:31] will retry after 2.798728298s: waiting for machine to come up
	I0816 00:33:56.782681   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.782714   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:56.782730   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:56.828595   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.828628   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:57.022870   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.028291   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.028326   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:57.522858   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.533079   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.533120   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.023304   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.029913   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:58.029948   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.523517   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.529934   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:33:58.536872   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:58.536898   78747 api_server.go:131] duration metric: took 5.014199256s to wait for apiserver health ...
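The apiserver wait logged above follows a simple pattern: poll https://192.168.50.128:8444/healthz roughly every 500ms, treat 403 (anonymous access still rejected) and 500 (post-start hooks such as rbac/bootstrap-roles not yet finished) as "not ready", and stop once the endpoint returns 200. A minimal Go sketch of such a poll loop follows; the URL, status codes, and cadence come from the log, while the client setup (including skipping TLS verification) is purely illustrative and not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given URL until it returns HTTP 200 or the timeout
// elapses. Illustrative only; a real caller would trust the cluster CA instead
// of skipping certificate verification.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // healthz returned 200: the control plane is serving
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.128:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}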
	I0816 00:33:58.536907   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:58.536916   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:58.539004   78747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:54.522157   78713 pod_ready.go:93] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.522186   78713 pod_ready.go:82] duration metric: took 7.507358513s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.522201   78713 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529305   78713 pod_ready.go:93] pod "etcd-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.529323   78713 pod_ready.go:82] duration metric: took 7.114484ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529331   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536656   78713 pod_ready.go:93] pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.536688   78713 pod_ready.go:82] duration metric: took 7.349231ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536701   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542615   78713 pod_ready.go:93] pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.542637   78713 pod_ready.go:82] duration metric: took 5.927403ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542650   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548165   78713 pod_ready.go:93] pod "kube-proxy-4xc89" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.548188   78713 pod_ready.go:82] duration metric: took 5.530073ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548200   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919561   78713 pod_ready.go:93] pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.919586   78713 pod_ready.go:82] duration metric: took 371.377774ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919598   78713 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:56.925892   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.926811   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.540592   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:58.554493   78747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
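With the apiserver healthy, the log shows minikube selecting the bridge CNI for the kvm2 driver + crio runtime and copying a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The file's contents are not printed, so the sketch below only shows how a conflist of that general shape could be assembled in Go; the plugin names, subnet, and fields are assumptions for illustration, not the file minikube actually generated.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A typical two-plugin bridge conflist: a bridge plugin with host-local
	// IPAM plus a portmap plugin. Every value here is illustrative.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// In the log, content of this shape ends up in /etc/cni/net.d/1-k8s.conflist on the VM.
	fmt.Println(string(out))
}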
	I0816 00:33:58.594341   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:58.605247   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:58.605293   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:58.605304   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:58.605314   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:58.605329   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:58.605342   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:33:58.605351   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:58.605358   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:58.605363   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:33:58.605372   78747 system_pods.go:74] duration metric: took 11.009517ms to wait for pod list to return data ...
	I0816 00:33:58.605384   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:58.609964   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:58.609996   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:58.610007   78747 node_conditions.go:105] duration metric: took 4.615471ms to run NodePressure ...
	I0816 00:33:58.610025   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:58.930292   78747 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937469   78747 kubeadm.go:739] kubelet initialised
	I0816 00:33:58.937499   78747 kubeadm.go:740] duration metric: took 7.181814ms waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937509   78747 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:59.036968   78747 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.046554   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046589   78747 pod_ready.go:82] duration metric: took 9.589918ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.046601   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046618   78747 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.053621   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053654   78747 pod_ready.go:82] duration metric: took 7.022323ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.053669   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053678   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.065329   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065357   78747 pod_ready.go:82] duration metric: took 11.650757ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.065378   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065387   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.074595   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074627   78747 pod_ready.go:82] duration metric: took 9.230183ms for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.074643   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074657   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.399077   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399105   78747 pod_ready.go:82] duration metric: took 324.440722ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.399116   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399124   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.797130   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797158   78747 pod_ready.go:82] duration metric: took 398.024149ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.797169   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797176   78747 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:00.197929   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197961   78747 pod_ready.go:82] duration metric: took 400.777243ms for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:34:00.197976   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197992   78747 pod_ready.go:39] duration metric: took 1.260464876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
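The pod_ready lines above (and the parallel ones for the embed-certs-758469 profile) all perform the same wait: fetch the pod from the kube-system namespace, check whether its PodReady condition is True, and retry until a per-pod timeout expires. A stripped-down client-go sketch of that check follows; the kubeconfig path is a placeholder, the pod name is taken from the log, and the skip-when-node-NotReady handling seen above is omitted.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; substitute a real one.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // the log waits up to 4m0s per pod
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-4n9qq", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}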
	I0816 00:34:00.198024   78747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:34:00.210255   78747 ops.go:34] apiserver oom_adj: -16
	I0816 00:34:00.210278   78747 kubeadm.go:597] duration metric: took 9.653197586s to restartPrimaryControlPlane
	I0816 00:34:00.210302   78747 kubeadm.go:394] duration metric: took 9.719364617s to StartCluster
	I0816 00:34:00.210322   78747 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.210405   78747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:00.212730   78747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.213053   78747 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:34:00.213162   78747 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:34:00.213247   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:00.213277   78747 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213292   78747 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213305   78747 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213313   78747 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:34:00.213344   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213352   78747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-616827"
	I0816 00:34:00.213298   78747 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213413   78747 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213435   78747 addons.go:243] addon metrics-server should already be in state true
	I0816 00:34:00.213463   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213751   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213795   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213752   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213886   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213756   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213992   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.215058   78747 out.go:177] * Verifying Kubernetes components...
	I0816 00:34:00.216719   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:00.229428   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0816 00:34:00.229676   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0816 00:34:00.229881   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230164   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230522   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230538   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230689   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230727   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230850   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.231488   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.231512   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.231754   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.232394   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.232426   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.232909   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0816 00:34:00.233400   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.233959   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.233979   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.234368   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.234576   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.238180   78747 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.238203   78747 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:34:00.238230   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.238598   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.238642   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.249682   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0816 00:34:00.250163   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.250894   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.250919   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.251326   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0816 00:34:00.251324   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.251663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.251828   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.252294   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.252318   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.252863   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.253070   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.253746   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.254958   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.255056   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0816 00:34:00.255513   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.256043   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.256083   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.256121   78747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:00.256494   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.257255   78747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:34:00.257377   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.257422   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.259132   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:34:00.259154   78747 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:34:00.259176   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.259204   78747 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.259223   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:34:00.259241   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.263096   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263213   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263688   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263874   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263996   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264175   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264441   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.264511   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264695   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.274557   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0816 00:34:00.274984   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.275444   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.275463   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.275735   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.275946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.277509   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.277745   78747 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.277762   78747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:34:00.277782   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.280264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280660   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.280689   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280790   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.280982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.281140   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.281286   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.445986   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:00.465112   78747 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:00.568927   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.602693   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.620335   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:34:00.620355   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:34:00.667790   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:34:00.667810   78747 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:34:00.698510   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.698536   78747 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:34:00.723319   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.975635   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.975663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976006   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976007   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976030   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.976044   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.976075   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976347   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976340   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976376   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.983280   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.983304   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.983587   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.983586   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.983620   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.678707   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678733   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.678889   78747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.076166351s)
	I0816 00:34:01.678936   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678955   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679115   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679136   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679145   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679153   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679473   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679497   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679484   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679514   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679521   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679525   78747 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-616827"
	I0816 00:34:01.679528   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679537   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679544   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679821   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679862   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679887   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.683006   78747 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 00:33:58.543282   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:58.543753   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:58.543783   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:58.543689   80212 retry.go:31] will retry after 4.402812235s: waiting for machine to come up
	I0816 00:34:00.927244   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:03.428032   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:04.178649   78489 start.go:364] duration metric: took 54.753990439s to acquireMachinesLock for "no-preload-819398"
	I0816 00:34:04.178706   78489 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:34:04.178714   78489 fix.go:54] fixHost starting: 
	I0816 00:34:04.179124   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:04.179162   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:04.195783   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0816 00:34:04.196138   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:04.196590   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:34:04.196614   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:04.196962   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:04.197161   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:04.197303   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:34:04.198795   78489 fix.go:112] recreateIfNeeded on no-preload-819398: state=Stopped err=<nil>
	I0816 00:34:04.198814   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	W0816 00:34:04.198978   78489 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:34:04.200736   78489 out.go:177] * Restarting existing kvm2 VM for "no-preload-819398" ...
	I0816 00:34:01.684641   78747 addons.go:510] duration metric: took 1.471480873s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 00:34:02.473603   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:04.476035   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:02.951078   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951631   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has current primary IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951672   79191 main.go:141] libmachine: (old-k8s-version-098619) Found IP for machine: 192.168.72.137
	I0816 00:34:02.951687   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserving static IP address...
	I0816 00:34:02.952154   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserved static IP address: 192.168.72.137
	I0816 00:34:02.952186   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.952201   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting for SSH to be available...
	I0816 00:34:02.952224   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | skip adding static IP to network mk-old-k8s-version-098619 - found existing host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"}
	I0816 00:34:02.952236   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Getting to WaitForSSH function...
	I0816 00:34:02.954361   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954686   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.954715   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954791   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH client type: external
	I0816 00:34:02.954830   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa (-rw-------)
	I0816 00:34:02.954871   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:02.954890   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | About to run SSH command:
	I0816 00:34:02.954909   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | exit 0
	I0816 00:34:03.078035   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | SSH cmd err, output: <nil>: 
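The WaitForSSH step above amounts to repeatedly running the external ssh command shown in the log with "exit 0" as the remote command until it succeeds. A small Go sketch using os/exec follows; the host, key path, and a subset of the options are copied from the log, while the retry count and 5s interval are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Host, key path, and options taken from the log; retry policy assumed.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa",
		"docker@192.168.72.137",
		"exit", "0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}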
	I0816 00:34:03.078408   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:34:03.079002   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.081041   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081391   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.081489   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081566   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:34:03.081748   79191 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:03.081767   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.082007   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.084022   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084333   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.084357   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084499   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.084700   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.084867   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.085074   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.085266   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.085509   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.085525   79191 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:03.186066   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:03.186094   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186368   79191 buildroot.go:166] provisioning hostname "old-k8s-version-098619"
	I0816 00:34:03.186397   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186597   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.189330   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189658   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.189702   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189792   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.190004   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190185   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190344   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.190481   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.190665   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.190688   79191 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098619 && echo "old-k8s-version-098619" | sudo tee /etc/hostname
	I0816 00:34:03.304585   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098619
	
	I0816 00:34:03.304608   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.307415   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307732   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.307763   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307955   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.308155   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308314   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308474   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.308629   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.308795   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.308811   79191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098619/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:03.418968   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:03.419010   79191 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:03.419045   79191 buildroot.go:174] setting up certificates
	I0816 00:34:03.419058   79191 provision.go:84] configureAuth start
	I0816 00:34:03.419072   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.419338   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.421799   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422159   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.422198   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422401   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.425023   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425417   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.425445   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425557   79191 provision.go:143] copyHostCerts
	I0816 00:34:03.425624   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:03.425646   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:03.425717   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:03.425875   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:03.425888   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:03.425921   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:03.426007   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:03.426017   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:03.426045   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:03.426112   79191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098619 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-098619]
	I0816 00:34:03.509869   79191 provision.go:177] copyRemoteCerts
	I0816 00:34:03.509932   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:03.509961   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.512603   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.512938   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.512984   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.513163   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.513451   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.513617   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.513777   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:03.596330   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 00:34:03.621969   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:03.646778   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:03.671937   79191 provision.go:87] duration metric: took 252.867793ms to configureAuth
	I0816 00:34:03.671964   79191 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:03.672149   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:34:03.672250   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.675207   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675600   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.675625   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675787   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.676006   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676199   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676360   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.676549   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.676762   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.676779   79191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:03.945259   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:03.945287   79191 machine.go:96] duration metric: took 863.526642ms to provisionDockerMachine
	I0816 00:34:03.945298   79191 start.go:293] postStartSetup for "old-k8s-version-098619" (driver="kvm2")
	I0816 00:34:03.945308   79191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:03.945335   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.945638   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:03.945666   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.948590   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.948967   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.948989   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.949152   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.949350   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.949491   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.949645   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.028994   79191 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:04.033776   79191 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:04.033799   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:04.033872   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:04.033943   79191 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:04.034033   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:04.045492   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:04.071879   79191 start.go:296] duration metric: took 126.569157ms for postStartSetup
	I0816 00:34:04.071920   79191 fix.go:56] duration metric: took 19.817260263s for fixHost
	I0816 00:34:04.071944   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.074942   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.075325   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075504   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.075699   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075846   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075977   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.076146   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:04.076319   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:04.076332   79191 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:04.178483   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768444.133390375
	
	I0816 00:34:04.178510   79191 fix.go:216] guest clock: 1723768444.133390375
	I0816 00:34:04.178519   79191 fix.go:229] Guest: 2024-08-16 00:34:04.133390375 +0000 UTC Remote: 2024-08-16 00:34:04.071925107 +0000 UTC m=+252.320651106 (delta=61.465268ms)
	I0816 00:34:04.178537   79191 fix.go:200] guest clock delta is within tolerance: 61.465268ms
	I0816 00:34:04.178541   79191 start.go:83] releasing machines lock for "old-k8s-version-098619", held for 19.923923778s
	I0816 00:34:04.178567   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.178875   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:04.181999   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182458   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.182490   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183192   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183357   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183412   79191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:04.183461   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.183553   79191 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:04.183575   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.186192   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186418   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186507   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186531   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186679   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.186811   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186836   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186850   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187016   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187032   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.187211   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187215   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.187364   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187488   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.283880   79191 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:04.289798   79191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:04.436822   79191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:04.443547   79191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:04.443631   79191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:04.464783   79191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:04.464807   79191 start.go:495] detecting cgroup driver to use...
	I0816 00:34:04.464873   79191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:04.481504   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:04.501871   79191 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:04.501942   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:04.521898   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:04.538186   79191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:04.704361   79191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:04.881682   79191 docker.go:233] disabling docker service ...
	I0816 00:34:04.881757   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:04.900264   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:04.916152   79191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:05.048440   79191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:05.166183   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:05.181888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:05.202525   79191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 00:34:05.202592   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.214655   79191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:05.214712   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.226052   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.236878   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.249217   79191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:05.260362   79191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:05.271039   79191 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:05.271108   79191 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:05.290423   79191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:05.307175   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:05.465815   79191 ssh_runner.go:195] Run: sudo systemctl restart crio
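The CRI-O reconfiguration traced above can be reproduced by hand inside the guest. This is only a sketch assembled from the ssh_runner commands shown in the log (same files, images, and values); it is not the exact code path minikube executes:

	# point crictl at the CRI-O socket
	sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# pause image and cgroupfs cgroup driver expected for Kubernetes v1.20.0
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# load br_netfilter (the sysctl probe above failed because the module was not loaded) and enable IPv4 forwarding
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	# apply the changes
	sudo systemctl daemon-reload && sudo systemctl restart crio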
	I0816 00:34:05.640787   79191 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:05.640878   79191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:05.646821   79191 start.go:563] Will wait 60s for crictl version
	I0816 00:34:05.646883   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:05.651455   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:05.698946   79191 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:05.699037   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.729185   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.772063   79191 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 00:34:05.773406   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:05.776689   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777177   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:05.777241   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777435   79191 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:05.782377   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:05.797691   79191 kubeadm.go:883] updating cluster {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:05.797872   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:34:05.797953   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:05.861468   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:05.861557   79191 ssh_runner.go:195] Run: which lz4
	I0816 00:34:05.866880   79191 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:34:05.872036   79191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:34:05.872071   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 00:34:04.202120   78489 main.go:141] libmachine: (no-preload-819398) Calling .Start
	I0816 00:34:04.202293   78489 main.go:141] libmachine: (no-preload-819398) Ensuring networks are active...
	I0816 00:34:04.203062   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network default is active
	I0816 00:34:04.203345   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network mk-no-preload-819398 is active
	I0816 00:34:04.205286   78489 main.go:141] libmachine: (no-preload-819398) Getting domain xml...
	I0816 00:34:04.206025   78489 main.go:141] libmachine: (no-preload-819398) Creating domain...
	I0816 00:34:05.553661   78489 main.go:141] libmachine: (no-preload-819398) Waiting to get IP...
	I0816 00:34:05.554629   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.555210   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.555309   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.555211   80407 retry.go:31] will retry after 298.759084ms: waiting for machine to come up
	I0816 00:34:05.856046   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.856571   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.856604   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.856530   80407 retry.go:31] will retry after 293.278331ms: waiting for machine to come up
	I0816 00:34:06.151110   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.151542   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.151571   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.151498   80407 retry.go:31] will retry after 332.472371ms: waiting for machine to come up
	I0816 00:34:06.485927   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.486487   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.486514   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.486459   80407 retry.go:31] will retry after 600.720276ms: waiting for machine to come up
	I0816 00:34:05.926954   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.929140   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:06.972334   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:07.469652   78747 node_ready.go:49] node "default-k8s-diff-port-616827" has status "Ready":"True"
	I0816 00:34:07.469684   78747 node_ready.go:38] duration metric: took 7.004536271s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:07.469700   78747 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:07.476054   78747 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482839   78747 pod_ready.go:93] pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.482861   78747 pod_ready.go:82] duration metric: took 6.779315ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482871   78747 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489325   78747 pod_ready.go:93] pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.489348   78747 pod_ready.go:82] duration metric: took 6.470629ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489357   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495536   78747 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.495555   78747 pod_ready.go:82] duration metric: took 6.192295ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495565   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:09.503258   78747 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.631328   79191 crio.go:462] duration metric: took 1.76448771s to copy over tarball
	I0816 00:34:07.631413   79191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:34:10.662435   79191 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.030990355s)
	I0816 00:34:10.662472   79191 crio.go:469] duration metric: took 3.031115615s to extract the tarball
	I0816 00:34:10.662482   79191 ssh_runner.go:146] rm: /preloaded.tar.lz4
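Applying the preload by hand follows the same pattern as the log: copy the cached tarball to /preloaded.tar.lz4 in the guest (the scp step above), then unpack it over /var and remove it. A minimal sketch, assuming the tarball is already in place and lz4 is available in the guest:

	# unpack the preloaded images into the container storage under /var, preserving extended attributes
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	# clean up and verify the runtime can now see the images
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json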
	I0816 00:34:10.707627   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:10.745704   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:10.745742   79191 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.745838   79191 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.745914   79191 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.745860   79191 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.745943   79191 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.745884   79191 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.746059   79191 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747781   79191 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.747803   79191 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.747808   79191 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.747824   79191 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.747842   79191 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.747883   79191 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.747895   79191 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747948   79191 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.916488   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.923947   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.931668   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.942764   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.948555   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.957593   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.970039   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 00:34:11.012673   79191 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 00:34:11.012707   79191 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.012778   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.026267   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:11.135366   79191 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 00:34:11.135398   79191 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.135451   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.149180   79191 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 00:34:11.149226   79191 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.149271   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183480   79191 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 00:34:11.183526   79191 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.183526   79191 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 00:34:11.183578   79191 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.183584   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183637   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186513   79191 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 00:34:11.186559   79191 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.186622   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186632   79191 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 00:34:11.186658   79191 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 00:34:11.186699   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186722   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.252857   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.252914   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.252935   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.253007   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.253012   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.253083   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.253140   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420527   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.420559   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.420564   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.420638   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420732   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.420791   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.420813   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591141   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.591197   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.591267   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.591337   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.591418   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591453   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.591505   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 00:34:11.721234   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 00:34:11.725967   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 00:34:11.731189   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 00:34:11.731276   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 00:34:11.742195   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 00:34:11.742224   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 00:34:11.742265   79191 cache_images.go:92] duration metric: took 996.507737ms to LoadCachedImages
	W0816 00:34:11.742327   79191 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0816 00:34:11.742342   79191 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0816 00:34:11.742464   79191 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-098619 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:11.742546   79191 ssh_runner.go:195] Run: crio config
	I0816 00:34:07.089462   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.090073   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.090099   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.089985   80407 retry.go:31] will retry after 666.260439ms: waiting for machine to come up
	I0816 00:34:07.757621   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.758156   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.758182   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.758105   80407 retry.go:31] will retry after 782.571604ms: waiting for machine to come up
	I0816 00:34:08.542021   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:08.542426   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:08.542475   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:08.542381   80407 retry.go:31] will retry after 840.347921ms: waiting for machine to come up
	I0816 00:34:09.384399   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:09.384866   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:09.384893   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:09.384824   80407 retry.go:31] will retry after 1.376690861s: waiting for machine to come up
	I0816 00:34:10.763158   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:10.763547   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:10.763573   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:10.763484   80407 retry.go:31] will retry after 1.237664711s: waiting for machine to come up
	I0816 00:34:10.426656   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:12.429312   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.354758   78747 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.354783   78747 pod_ready.go:82] duration metric: took 3.859210458s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.354796   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363323   78747 pod_ready.go:93] pod "kube-proxy-f99ds" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.363347   78747 pod_ready.go:82] duration metric: took 8.543406ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363359   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369799   78747 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.369826   78747 pod_ready.go:82] duration metric: took 6.458192ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369858   78747 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:13.376479   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.791749   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:34:11.791779   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:11.791791   79191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:11.791810   79191 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098619 NodeName:old-k8s-version-098619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 00:34:11.791969   79191 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-098619"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:11.792046   79191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 00:34:11.802572   79191 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:11.802649   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:11.812583   79191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 00:34:11.831551   79191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:11.852476   79191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 00:34:11.875116   79191 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:11.879833   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:11.893308   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:12.038989   79191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:12.061736   79191 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619 for IP: 192.168.72.137
	I0816 00:34:12.061761   79191 certs.go:194] generating shared ca certs ...
	I0816 00:34:12.061780   79191 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.061992   79191 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:12.062046   79191 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:12.062059   79191 certs.go:256] generating profile certs ...
	I0816 00:34:12.062193   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key
	I0816 00:34:12.062283   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4
	I0816 00:34:12.062343   79191 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key
	I0816 00:34:12.062485   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:12.062523   79191 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:12.062536   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:12.062579   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:12.062614   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:12.062658   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:12.062721   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:12.063630   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:12.106539   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:12.139393   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:12.171548   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:12.213113   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 00:34:12.244334   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:34:12.287340   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:12.331047   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:34:12.369666   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:12.397260   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:12.424009   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:12.450212   79191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:12.471550   79191 ssh_runner.go:195] Run: openssl version
	I0816 00:34:12.479821   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:12.494855   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500546   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500620   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.508817   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:12.521689   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:12.533904   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538789   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538946   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.546762   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:12.561940   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:12.575852   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582377   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582457   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.590772   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:12.604976   79191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:12.610332   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:12.617070   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:12.625769   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:12.634342   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:12.641486   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:12.650090   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 00:34:12.658206   79191 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:12.658306   79191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:12.658392   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.703323   79191 cri.go:89] found id: ""
	I0816 00:34:12.703399   79191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:12.714950   79191 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:12.714970   79191 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:12.715047   79191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:12.727051   79191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:12.728059   79191 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:12.728655   79191 kubeconfig.go:62] /home/jenkins/minikube-integration/19452-12919/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-098619" cluster setting kubeconfig missing "old-k8s-version-098619" context setting]
	I0816 00:34:12.729552   79191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.731269   79191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:12.744732   79191 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0816 00:34:12.744766   79191 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:12.744777   79191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:12.744833   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.783356   79191 cri.go:89] found id: ""
	I0816 00:34:12.783432   79191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:12.801942   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:12.816412   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:12.816433   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:12.816480   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:12.827686   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:12.827757   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:12.838063   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:12.847714   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:12.847808   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:12.858274   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.869328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:12.869389   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.881457   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:12.892256   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:12.892325   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:12.902115   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:12.912484   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.040145   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.851639   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.085396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.208430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.321003   79191 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:14.321084   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:14.822130   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.321780   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.822121   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:16.322077   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:12.002977   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:12.003441   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:12.003470   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:12.003401   80407 retry.go:31] will retry after 1.413320186s: waiting for machine to come up
	I0816 00:34:13.418972   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:13.419346   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:13.419374   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:13.419284   80407 retry.go:31] will retry after 2.055525842s: waiting for machine to come up
	I0816 00:34:15.476550   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:15.477044   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:15.477072   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:15.477021   80407 retry.go:31] will retry after 2.728500649s: waiting for machine to come up
	I0816 00:34:14.926133   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.930322   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:15.377291   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:17.877627   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.821714   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.321166   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.821648   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.321711   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.821520   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.321732   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.821325   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.321783   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.821958   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:21.321139   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.208958   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:18.209350   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:18.209379   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:18.209302   80407 retry.go:31] will retry after 3.922749943s: waiting for machine to come up
	I0816 00:34:19.426265   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.926480   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.134804   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135230   78489 main.go:141] libmachine: (no-preload-819398) Found IP for machine: 192.168.61.15
	I0816 00:34:22.135266   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has current primary IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135292   78489 main.go:141] libmachine: (no-preload-819398) Reserving static IP address...
	I0816 00:34:22.135596   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.135629   78489 main.go:141] libmachine: (no-preload-819398) DBG | skip adding static IP to network mk-no-preload-819398 - found existing host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"}
	I0816 00:34:22.135644   78489 main.go:141] libmachine: (no-preload-819398) Reserved static IP address: 192.168.61.15
	I0816 00:34:22.135661   78489 main.go:141] libmachine: (no-preload-819398) Waiting for SSH to be available...
	I0816 00:34:22.135675   78489 main.go:141] libmachine: (no-preload-819398) DBG | Getting to WaitForSSH function...
	I0816 00:34:22.137639   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.137925   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.137956   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.138099   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH client type: external
	I0816 00:34:22.138141   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa (-rw-------)
	I0816 00:34:22.138198   78489 main.go:141] libmachine: (no-preload-819398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:22.138233   78489 main.go:141] libmachine: (no-preload-819398) DBG | About to run SSH command:
	I0816 00:34:22.138248   78489 main.go:141] libmachine: (no-preload-819398) DBG | exit 0
	I0816 00:34:22.262094   78489 main.go:141] libmachine: (no-preload-819398) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:22.262496   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetConfigRaw
	I0816 00:34:22.263081   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.265419   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.265746   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.265782   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.266097   78489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/config.json ...
	I0816 00:34:22.266283   78489 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:22.266301   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:22.266501   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.268848   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269269   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.269308   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269356   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.269537   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269684   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269803   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.269971   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.270185   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.270197   78489 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:22.374848   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:22.374880   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375169   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:34:22.375195   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375407   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.378309   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378649   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.378678   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378853   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.379060   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379203   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379362   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.379568   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.379735   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.379749   78489 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-819398 && echo "no-preload-819398" | sudo tee /etc/hostname
	I0816 00:34:22.496438   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-819398
	
	I0816 00:34:22.496467   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.499101   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499411   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.499443   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499703   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.499912   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500116   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500247   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.500419   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.500624   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.500650   78489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-819398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-819398/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-819398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:22.619769   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:22.619802   78489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:22.619826   78489 buildroot.go:174] setting up certificates
	I0816 00:34:22.619837   78489 provision.go:84] configureAuth start
	I0816 00:34:22.619847   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.620106   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.623130   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623485   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.623510   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623629   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.625964   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626308   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.626335   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626475   78489 provision.go:143] copyHostCerts
	I0816 00:34:22.626536   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:22.626557   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:22.626629   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:22.626756   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:22.626768   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:22.626798   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:22.626889   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:22.626899   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:22.626925   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:22.627008   78489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.no-preload-819398 san=[127.0.0.1 192.168.61.15 localhost minikube no-preload-819398]
	I0816 00:34:22.710036   78489 provision.go:177] copyRemoteCerts
	I0816 00:34:22.710093   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:22.710120   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.712944   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713380   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.713409   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713612   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.713780   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.713926   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.714082   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:22.800996   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:22.828264   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:34:22.855258   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:22.880981   78489 provision.go:87] duration metric: took 261.134406ms to configureAuth
	I0816 00:34:22.881013   78489 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:22.881176   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:22.881240   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.883962   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884348   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.884368   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884611   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.884828   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885052   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885248   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.885448   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.885639   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.885661   78489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:23.154764   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:23.154802   78489 machine.go:96] duration metric: took 888.504728ms to provisionDockerMachine
	I0816 00:34:23.154821   78489 start.go:293] postStartSetup for "no-preload-819398" (driver="kvm2")
	I0816 00:34:23.154837   78489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:23.154860   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.155176   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:23.155205   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.158105   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158482   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.158517   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158674   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.158864   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.159039   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.159198   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.241041   78489 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:23.245237   78489 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:23.245260   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:23.245324   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:23.245398   78489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:23.245480   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:23.254735   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:23.279620   78489 start.go:296] duration metric: took 124.783636ms for postStartSetup
	I0816 00:34:23.279668   78489 fix.go:56] duration metric: took 19.100951861s for fixHost
	I0816 00:34:23.279693   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.282497   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.282959   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.282981   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.283184   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.283376   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283514   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283687   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.283870   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:23.284027   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:23.284037   78489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:23.390632   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768463.360038650
	
	I0816 00:34:23.390658   78489 fix.go:216] guest clock: 1723768463.360038650
	I0816 00:34:23.390668   78489 fix.go:229] Guest: 2024-08-16 00:34:23.36003865 +0000 UTC Remote: 2024-08-16 00:34:23.27967333 +0000 UTC m=+356.445975156 (delta=80.36532ms)
	I0816 00:34:23.390697   78489 fix.go:200] guest clock delta is within tolerance: 80.36532ms
	I0816 00:34:23.390710   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 19.212026147s
	I0816 00:34:23.390729   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.390977   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:23.393728   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394050   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.394071   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394255   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394722   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394895   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394977   78489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:23.395028   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.395135   78489 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:23.395151   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.397773   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.397939   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398196   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398237   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398354   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398480   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398507   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398515   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398717   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.398722   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398887   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398884   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.399029   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.399164   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.497983   78489 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:23.503896   78489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:23.660357   78489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:23.666714   78489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:23.666775   78489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:23.684565   78489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:23.684586   78489 start.go:495] detecting cgroup driver to use...
	I0816 00:34:23.684655   78489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:23.701981   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:23.715786   78489 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:23.715852   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:23.733513   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:23.748705   78489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:23.866341   78489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:24.016845   78489 docker.go:233] disabling docker service ...
	I0816 00:34:24.016918   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:24.032673   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:24.046465   78489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:24.184862   78489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:24.309066   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
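Editor's note: before switching to CRI-O, the sequence above stops, disables, and masks the cri-dockerd and docker units so they cannot be socket-activated back to life, then re-checks with systemctl is-active. A compact sketch of that stop/disable/mask pattern (maskService is an invented helper; minikube issues the individual systemctl commands over SSH rather than calling a function like this):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // maskService stops, disables, and masks a systemd unit. Errors from
    // "stop" are tolerated because the unit may not be running at all.
    func maskService(unit string) error {
        _ = exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
        if err := exec.Command("sudo", "systemctl", "disable", unit).Run(); err != nil {
            return fmt.Errorf("disable %s: %w", unit, err)
        }
        return exec.Command("sudo", "systemctl", "mask", unit).Run()
    }

    func main() {
        for _, unit := range []string{"cri-docker.socket", "docker.socket", "docker.service"} {
            if err := maskService(unit); err != nil {
                fmt.Println("warning:", err)
            }
        }
    }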
	I0816 00:34:24.323818   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:24.344352   78489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:34:24.344422   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.355015   78489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:24.355093   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.365665   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.377238   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.388619   78489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:24.399306   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.410087   78489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.428465   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
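Editor's note: the block above first points crictl at the CRI-O socket via /etc/crictl.yaml, then uses sed to pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and open low ports by adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough local Go equivalent of the value-replacement part (setCrioOption is a name invented for this sketch; minikube itself runs sed over SSH and also handles the conmon_cgroup and default_sysctls edits shown above):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setCrioOption rewrites any existing "key = ..." line in the CRI-O
    // drop-in to the desired quoted value, mirroring the sed substitutions.
    func setCrioOption(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
        return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
        conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }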
	I0816 00:34:24.439026   78489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:24.448856   78489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:24.448943   78489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:24.463002   78489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
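Editor's note: the sysctl probe above fails with status 255 simply because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A small Go sketch of the same check-then-fix, assuming it runs as root directly on the guest (minikube performs the equivalent shell commands over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureNetfilter loads br_netfilter if the bridge netfilter sysctl is
    // missing, then enables IPv4 forwarding.
    func ensureNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() {
        if err := ensureNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }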
	I0816 00:34:24.473030   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:24.587542   78489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:24.719072   78489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:24.719159   78489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:24.723789   78489 start.go:563] Will wait 60s for crictl version
	I0816 00:34:24.723842   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:24.727616   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:24.766517   78489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:24.766600   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.795204   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.824529   78489 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:34:20.376278   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.376510   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:24.876314   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.822114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.321350   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.821541   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.322014   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.821938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.321883   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.821178   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.321881   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.821199   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:26.321573   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.825725   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:24.828458   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829018   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:24.829045   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829336   78489 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:24.833711   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
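Editor's note: the bash one-liner above is an idempotent way to pin host.minikube.internal in /etc/hosts: filter out any stale entry, append the fresh one, and copy the result back into place. Roughly the same idea in Go (upsertHost is an invented name; this assumes it runs as root on the guest):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so it contains exactly one
    // tab-separated entry for the given hostname.
    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale entry for this hostname
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }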
	I0816 00:34:24.847017   78489 kubeadm.go:883] updating cluster {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:24.847136   78489 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:34:24.847171   78489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:24.883489   78489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:34:24.883515   78489 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:24.883592   78489 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.883612   78489 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.883664   78489 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:24.883690   78489 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.883719   78489 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.883595   78489 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.883927   78489 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.884016   78489 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885061   78489 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.885185   78489 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885207   78489 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.885204   78489 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.885225   78489 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.042311   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.042317   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.048181   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.050502   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.059137   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.091688   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 00:34:25.096653   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.126261   78489 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 00:34:25.126311   78489 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.126368   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.164673   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.189972   78489 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 00:34:25.190014   78489 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.190051   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249632   78489 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 00:34:25.249674   78489 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.249717   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249780   78489 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 00:34:25.249824   78489 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.249884   78489 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 00:34:25.249910   78489 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.249887   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249942   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360038   78489 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 00:34:25.360082   78489 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.360121   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360133   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.360191   78489 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 00:34:25.360208   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.360221   78489 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.360256   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360283   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.360326   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.360337   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.462610   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.462691   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.480037   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.480114   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.480176   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.480211   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.489343   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.642853   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.642913   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.642963   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.645719   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.645749   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.645833   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.645899   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.802574   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 00:34:25.802645   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.802687   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.802728   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.808235   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 00:34:25.808330   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 00:34:25.808387   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 00:34:25.808401   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 00:34:25.808432   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:25.808334   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:25.808471   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:25.808480   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:25.816510   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 00:34:25.816527   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.816560   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.885445   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 00:34:25.885532   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 00:34:25.885549   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:25.885588   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 00:34:25.885600   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:25.885674   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 00:34:25.885690   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 00:34:25.885711   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 00:34:24.426102   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.927534   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.877013   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:29.378108   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.821489   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.322094   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.321201   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.821854   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.321188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.821729   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.321316   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.821998   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:31.322184   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.938767   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.122182459s)
	I0816 00:34:27.938804   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 00:34:27.938801   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.05323098s)
	I0816 00:34:27.938826   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.05321158s)
	I0816 00:34:27.938831   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 00:34:27.938833   78489 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:27.938843   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 00:34:27.938906   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:31.645449   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.706515577s)
	I0816 00:34:31.645486   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 00:34:31.645514   78489 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:31.645563   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:29.427463   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.927253   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.875608   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:33.876822   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.821361   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.321205   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.822088   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.322126   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.821956   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.321921   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.821245   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.822034   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:36.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.625714   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.980118908s)
	I0816 00:34:33.625749   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 00:34:33.625773   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:33.625824   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:35.680134   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054281396s)
	I0816 00:34:35.680167   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 00:34:35.680209   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:35.680276   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:34.426416   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.427589   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:38.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:35.877327   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:37.877385   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.821567   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.321329   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.822169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.321832   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.821404   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.321406   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.821914   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.322169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.821149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:41.322125   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.430152   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.749849436s)
	I0816 00:34:37.430180   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 00:34:37.430208   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:37.430254   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:39.684335   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.254047221s)
	I0816 00:34:39.684365   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 00:34:39.684391   78489 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:39.684445   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:40.328672   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 00:34:40.328722   78489 cache_images.go:123] Successfully loaded all cached images
	I0816 00:34:40.328729   78489 cache_images.go:92] duration metric: took 15.445200533s to LoadCachedImages
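Editor's note: the 15.44s figure above is the total for transferring and loading all of the cached image tarballs one at a time with "sudo podman load -i", skipping the transfer when a tarball with matching size/mtime already exists on the guest. A minimal standalone sketch of that load loop (paths copied from the log; loadCachedImages is an invented name, and in minikube the commands run over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // loadCachedImages streams pre-downloaded image tarballs into the CRI-O
    // image store one at a time and reports how long each load took.
    func loadCachedImages(tarballs []string) error {
        for _, tb := range tarballs {
            start := time.Now()
            if out, err := exec.Command("sudo", "podman", "load", "-i", tb).CombinedOutput(); err != nil {
                return fmt.Errorf("podman load %s: %v: %s", tb, err, out)
            }
            fmt.Printf("loaded %s in %s\n", tb, time.Since(start))
        }
        return nil
    }

    func main() {
        images := []string{
            "/var/lib/minikube/images/kube-scheduler_v1.31.0",
            "/var/lib/minikube/images/etcd_3.5.15-0",
            "/var/lib/minikube/images/coredns_v1.11.1",
        }
        if err := loadCachedImages(images); err != nil {
            fmt.Println(err)
        }
    }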
	I0816 00:34:40.328743   78489 kubeadm.go:934] updating node { 192.168.61.15 8443 v1.31.0 crio true true} ...
	I0816 00:34:40.328897   78489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-819398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:40.328994   78489 ssh_runner.go:195] Run: crio config
	I0816 00:34:40.383655   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:40.383675   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:40.383685   78489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:40.383712   78489 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.15 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-819398 NodeName:no-preload-819398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:34:40.383855   78489 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-819398"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:40.383930   78489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:34:40.395384   78489 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:40.395457   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:40.405037   78489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 00:34:40.423278   78489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:40.440963   78489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 00:34:40.458845   78489 ssh_runner.go:195] Run: grep 192.168.61.15	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:40.462574   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:40.475524   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:40.614624   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:40.632229   78489 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398 for IP: 192.168.61.15
	I0816 00:34:40.632252   78489 certs.go:194] generating shared ca certs ...
	I0816 00:34:40.632267   78489 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:40.632430   78489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:40.632483   78489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:40.632497   78489 certs.go:256] generating profile certs ...
	I0816 00:34:40.632598   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/client.key
	I0816 00:34:40.632679   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key.a9de72ef
	I0816 00:34:40.632759   78489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key
	I0816 00:34:40.632919   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:40.632962   78489 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:40.632978   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:40.633011   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:40.633042   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:40.633068   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:40.633124   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:40.633963   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:40.676094   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:40.707032   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:40.740455   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:40.778080   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 00:34:40.809950   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:34:40.841459   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:40.866708   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:34:40.893568   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:40.917144   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:40.942349   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:40.966731   78489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:40.984268   78489 ssh_runner.go:195] Run: openssl version
	I0816 00:34:40.990614   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:41.002909   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007595   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007645   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.013618   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:41.024886   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:41.036350   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040801   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040845   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.046554   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:41.057707   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:41.069566   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074107   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074159   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.080113   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
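Editor's note: the three test/ln blocks above install each PEM under /usr/share/ca-certificates and link it from /etc/ssl/certs/<subject-hash>.0, the hashed-directory layout that OpenSSL-based clients scan when building their trust store. A sketch of one such install step (installCACert is an invented helper; minikube runs the openssl and ln commands over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCACert computes the OpenSSL subject hash of a PEM certificate and
    // points /etc/ssl/certs/<hash>.0 at it so TLS clients will trust it.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // replace a stale link if present
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }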
	I0816 00:34:41.091854   78489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:41.096543   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:41.102883   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:41.109228   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:41.115622   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:41.121895   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:41.128016   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
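Editor's note: each "openssl x509 -noout ... -checkend 86400" above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger certificate regeneration on restart. The same question asked with Go's crypto/x509 (expiresWithin is an invented helper, not minikube's code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }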
	I0816 00:34:41.134126   78489 kubeadm.go:392] StartCluster: {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:41.134230   78489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:41.134310   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.178898   78489 cri.go:89] found id: ""
	I0816 00:34:41.178972   78489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:41.190167   78489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:41.190184   78489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:41.190223   78489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:41.200385   78489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:41.201824   78489 kubeconfig.go:125] found "no-preload-819398" server: "https://192.168.61.15:8443"
	I0816 00:34:41.204812   78489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:41.225215   78489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.15
	I0816 00:34:41.225252   78489 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:41.225265   78489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:41.225323   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.269288   78489 cri.go:89] found id: ""
	I0816 00:34:41.269377   78489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:41.286238   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:41.297713   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:41.297732   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:41.297782   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:41.308635   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:41.308695   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:41.320045   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:41.329866   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:41.329952   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:41.341488   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.351018   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:41.351083   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.360845   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:41.370730   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:41.370808   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:41.382572   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:41.392544   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.515558   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.425671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:43.426507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:40.377638   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:42.877395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:41.821459   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.321938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.822038   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.321447   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.821571   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.321428   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.821496   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:46.322149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.610068   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.094473643s)
	I0816 00:34:42.610106   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.850562   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.916519   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:43.042025   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:43.042117   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.543065   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.043098   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.061154   78489 api_server.go:72] duration metric: took 1.019134992s to wait for apiserver process to appear ...
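Editor's note: the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines throughout this log are a poll loop: check for the process, sleep briefly, retry, until it appears or a timeout is hit. A stripped-down sketch of such a loop (waitForProcess is an invented name; minikube's actual interval and timeout may differ):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until a process matching pattern exists or
    // the timeout elapses.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
            fmt.Println(err)
        }
    }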
	I0816 00:34:44.061180   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:34:44.061199   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.718683   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.718717   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:46.718730   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.785528   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.785559   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:47.061692   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.066556   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.066590   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:47.562057   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.569664   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.569699   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:48.061258   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:48.065926   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:34:48.073136   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:34:48.073165   78489 api_server.go:131] duration metric: took 4.011977616s to wait for apiserver health ...
	I0816 00:34:48.073179   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:48.073189   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:48.075105   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
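(Editorial note: the 403 -> 500 -> 200 progression above is the apiserver's /healthz endpoint coming up: anonymous requests are rejected outright (403) until RBAC permits unauthenticated health checks, then /healthz itself reports failing poststarthooks (500), and finally it returns 200 "ok". A minimal sketch of polling that endpoint until it is healthy follows; the certificate-skipping client, 500ms interval, and timeout are illustrative shortcuts, and a real client would trust the cluster CA instead.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			// Print the failing checks, as the log above does for 403/500 responses.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.15:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}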
	I0816 00:34:45.925817   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.925984   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:45.376424   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.377794   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.876764   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:46.822140   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.321575   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.321365   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.822009   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.321536   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.821189   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.321387   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.821982   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:51.322075   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.076340   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:34:48.113148   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:34:48.152316   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:34:48.166108   78489 system_pods.go:59] 8 kube-system pods found
	I0816 00:34:48.166142   78489 system_pods.go:61] "coredns-6f6b679f8f-sv454" [5ba1d55f-4455-4ad1-b3c8-7671ce481dd2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:34:48.166154   78489 system_pods.go:61] "etcd-no-preload-819398" [b5e55df3-fb20-4980-928f-31217bf25351] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:34:48.166164   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [7670f41c-8439-4782-a3c8-077a144d2998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:34:48.166175   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [61a6080a-5e65-4400-b230-0703f347fc17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:34:48.166182   78489 system_pods.go:61] "kube-proxy-xdm7w" [9d0517c5-8cf7-47a0-86d0-c674677e9f46] Running
	I0816 00:34:48.166191   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [af346e37-312a-4225-b3bf-0ddda71022dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:34:48.166204   78489 system_pods.go:61] "metrics-server-6867b74b74-mm5l7" [2ebc3f9f-e1a7-47b6-849e-6a4995d13206] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:34:48.166214   78489 system_pods.go:61] "storage-provisioner" [745bbfbd-aedb-4e68-946e-5a7ead1d5b48] Running
	I0816 00:34:48.166223   78489 system_pods.go:74] duration metric: took 13.883212ms to wait for pod list to return data ...
	I0816 00:34:48.166235   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:34:48.170444   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:34:48.170478   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:34:48.170492   78489 node_conditions.go:105] duration metric: took 4.251703ms to run NodePressure ...
	I0816 00:34:48.170520   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:48.437519   78489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:34:48.441992   78489 kubeadm.go:739] kubelet initialised
	I0816 00:34:48.442015   78489 kubeadm.go:740] duration metric: took 4.465986ms waiting for restarted kubelet to initialise ...
	I0816 00:34:48.442025   78489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:48.447127   78489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:50.453956   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.926184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.926515   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.876909   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.376236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.822066   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.321534   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.821154   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.321256   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.821510   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.321984   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.821175   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.321601   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:56.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.454122   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.954716   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.426224   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.926472   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.376394   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:58.876502   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.821891   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.321266   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.821346   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.321718   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.821304   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.821302   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.821563   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:01.321323   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.453951   78489 pod_ready.go:93] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:57.453974   78489 pod_ready.go:82] duration metric: took 9.00682228s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:57.453983   78489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.460582   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.961243   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:00.961269   78489 pod_ready.go:82] duration metric: took 3.507278873s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:00.961279   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468020   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:01.468047   78489 pod_ready.go:82] duration metric: took 506.758881ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468060   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.425956   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.925967   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.876678   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:03.376662   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.821317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.321560   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.821707   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.322110   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.821327   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.321430   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.821935   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.321559   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.821373   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.975498   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.975522   78489 pod_ready.go:82] duration metric: took 1.50745395s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.975531   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980290   78489 pod_ready.go:93] pod "kube-proxy-xdm7w" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.980316   78489 pod_ready.go:82] duration metric: took 4.778704ms for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980328   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988237   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.988260   78489 pod_ready.go:82] duration metric: took 7.924207ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988268   78489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:04.993992   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
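(Editorial note: the pod_ready lines interleaved through this log are minikube repeatedly reading each system pod's Ready condition until it turns True or the 4m0s budget runs out. A minimal client-go sketch of the same check for a single pod follows; the kubeconfig path and pod name are taken from the log as placeholders, and the 2s poll interval is an illustrative assumption.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path and pod name; adjust for a real cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-sv454", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}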
	I0816 00:35:04.426419   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.426648   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.927578   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:05.877102   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:07.877187   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.821405   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.321781   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.821420   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.321483   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.821347   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.321167   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.821188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.821179   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:11.322114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.994539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.995530   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.494248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.425605   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:13.426338   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:10.378729   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:12.875673   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:14.876717   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.822105   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.321963   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.822172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.321805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.821971   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:14.321784   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:14.321882   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:14.360939   79191 cri.go:89] found id: ""
	I0816 00:35:14.360962   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.360971   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:14.360976   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:14.361028   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:14.397796   79191 cri.go:89] found id: ""
	I0816 00:35:14.397824   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.397836   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:14.397858   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:14.397922   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:14.433924   79191 cri.go:89] found id: ""
	I0816 00:35:14.433950   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.433960   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:14.433968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:14.434024   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:14.468657   79191 cri.go:89] found id: ""
	I0816 00:35:14.468685   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.468696   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:14.468704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:14.468770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:14.505221   79191 cri.go:89] found id: ""
	I0816 00:35:14.505247   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.505256   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:14.505264   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:14.505323   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:14.546032   79191 cri.go:89] found id: ""
	I0816 00:35:14.546062   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.546072   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:14.546079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:14.546147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:14.581260   79191 cri.go:89] found id: ""
	I0816 00:35:14.581284   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.581292   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:14.581298   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:14.581352   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:14.616103   79191 cri.go:89] found id: ""
	I0816 00:35:14.616127   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.616134   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:14.616142   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:14.616153   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:14.690062   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:14.690106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:14.735662   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:14.735699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:14.786049   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:14.786086   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:14.800375   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:14.800405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:14.931822   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
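(Editorial note: the long blocks from process 79191 are a diagnostic loop: with no kube-apiserver process found, minikube lists CRI containers by name via crictl and, finding none, gathers kubelet, dmesg, CRI-O, and "describe nodes" output, the last of which fails because nothing is listening on localhost:8443. A small sketch of the crictl query follows, with an empty-output check standing in for the "found id" / "No container was found" logic; the helper name and the container-name list are illustrative.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (running or exited)
// whose name matches the given filter, mirroring the cri.go step in the log.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}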
	I0816 00:35:13.494676   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.497759   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.925671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.926279   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.375842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.376005   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.432686   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:17.448728   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:17.448806   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:17.496384   79191 cri.go:89] found id: ""
	I0816 00:35:17.496523   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.496568   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:17.496581   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:17.496646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:17.560779   79191 cri.go:89] found id: ""
	I0816 00:35:17.560810   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.560820   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:17.560829   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:17.560891   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:17.606007   79191 cri.go:89] found id: ""
	I0816 00:35:17.606036   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.606047   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:17.606054   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:17.606123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:17.639910   79191 cri.go:89] found id: ""
	I0816 00:35:17.639937   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.639945   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:17.639951   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:17.640030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:17.676534   79191 cri.go:89] found id: ""
	I0816 00:35:17.676563   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.676573   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:17.676581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:17.676645   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:17.716233   79191 cri.go:89] found id: ""
	I0816 00:35:17.716255   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.716262   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:17.716268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:17.716334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:17.753648   79191 cri.go:89] found id: ""
	I0816 00:35:17.753686   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.753696   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:17.753704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:17.753763   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:17.791670   79191 cri.go:89] found id: ""
	I0816 00:35:17.791694   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.791702   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:17.791711   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:17.791722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:17.840616   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:17.840650   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:17.854949   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:17.854981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:17.933699   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:17.933724   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:17.933750   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:18.010177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:18.010211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:20.551384   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:20.564463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:20.564540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:20.604361   79191 cri.go:89] found id: ""
	I0816 00:35:20.604389   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.604399   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:20.604405   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:20.604453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:20.639502   79191 cri.go:89] found id: ""
	I0816 00:35:20.639528   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.639535   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:20.639541   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:20.639590   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:20.676430   79191 cri.go:89] found id: ""
	I0816 00:35:20.676476   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.676484   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:20.676496   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:20.676551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:20.711213   79191 cri.go:89] found id: ""
	I0816 00:35:20.711243   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.711253   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:20.711261   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:20.711320   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:20.745533   79191 cri.go:89] found id: ""
	I0816 00:35:20.745563   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.745574   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:20.745581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:20.745644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:20.781031   79191 cri.go:89] found id: ""
	I0816 00:35:20.781056   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.781064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:20.781071   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:20.781119   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:20.819966   79191 cri.go:89] found id: ""
	I0816 00:35:20.819994   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.820005   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:20.820012   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:20.820096   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:20.859011   79191 cri.go:89] found id: ""
	I0816 00:35:20.859041   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.859052   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:20.859063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:20.859078   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:20.909479   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:20.909513   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:20.925627   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:20.925653   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:21.001707   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:21.001733   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:21.001747   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:21.085853   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:21.085893   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:17.994492   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:20.496255   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:22.426663   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:21.878587   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.377462   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:23.626499   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:23.640337   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:23.640395   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:23.679422   79191 cri.go:89] found id: ""
	I0816 00:35:23.679449   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.679457   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:23.679463   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:23.679522   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:23.716571   79191 cri.go:89] found id: ""
	I0816 00:35:23.716594   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.716601   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:23.716607   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:23.716660   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:23.752539   79191 cri.go:89] found id: ""
	I0816 00:35:23.752563   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.752573   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:23.752581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:23.752640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:23.790665   79191 cri.go:89] found id: ""
	I0816 00:35:23.790693   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.790700   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:23.790707   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:23.790757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:23.827695   79191 cri.go:89] found id: ""
	I0816 00:35:23.827719   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.827727   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:23.827733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:23.827792   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:23.867664   79191 cri.go:89] found id: ""
	I0816 00:35:23.867687   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.867695   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:23.867701   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:23.867776   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:23.907844   79191 cri.go:89] found id: ""
	I0816 00:35:23.907871   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.907882   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:23.907890   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:23.907951   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:23.945372   79191 cri.go:89] found id: ""
	I0816 00:35:23.945403   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.945414   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:23.945424   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:23.945438   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:23.998270   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:23.998302   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:24.012794   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:24.012824   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:24.087285   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:24.087308   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:24.087340   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:24.167151   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:24.167184   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:26.710285   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:26.724394   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:26.724453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:26.764667   79191 cri.go:89] found id: ""
	I0816 00:35:26.764690   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.764698   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:26.764704   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:26.764756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:22.994036   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.995035   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.927042   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:27.426054   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.877007   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.376563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.806631   79191 cri.go:89] found id: ""
	I0816 00:35:26.806660   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.806670   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:26.806677   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:26.806741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:26.843434   79191 cri.go:89] found id: ""
	I0816 00:35:26.843473   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.843485   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:26.843493   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:26.843576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:26.882521   79191 cri.go:89] found id: ""
	I0816 00:35:26.882556   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.882566   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:26.882574   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:26.882635   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:26.917956   79191 cri.go:89] found id: ""
	I0816 00:35:26.917985   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.917995   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:26.918004   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:26.918056   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:26.953168   79191 cri.go:89] found id: ""
	I0816 00:35:26.953191   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.953199   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:26.953205   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:26.953251   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:26.991366   79191 cri.go:89] found id: ""
	I0816 00:35:26.991397   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.991408   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:26.991416   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:26.991479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:27.028591   79191 cri.go:89] found id: ""
	I0816 00:35:27.028619   79191 logs.go:276] 0 containers: []
	W0816 00:35:27.028626   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:27.028635   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:27.028647   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:27.111613   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:27.111645   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:27.153539   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:27.153575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:27.209377   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:27.209420   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:27.223316   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:27.223343   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:27.301411   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:29.801803   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:29.815545   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:29.815626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:29.853638   79191 cri.go:89] found id: ""
	I0816 00:35:29.853668   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.853678   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:29.853687   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:29.853756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:29.892532   79191 cri.go:89] found id: ""
	I0816 00:35:29.892554   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.892561   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:29.892567   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:29.892622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:29.932486   79191 cri.go:89] found id: ""
	I0816 00:35:29.932511   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.932519   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:29.932524   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:29.932580   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:29.973161   79191 cri.go:89] found id: ""
	I0816 00:35:29.973194   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.973205   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:29.973213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:29.973275   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:30.009606   79191 cri.go:89] found id: ""
	I0816 00:35:30.009629   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.009637   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:30.009643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:30.009691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:30.045016   79191 cri.go:89] found id: ""
	I0816 00:35:30.045043   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.045050   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:30.045057   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:30.045113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:30.079934   79191 cri.go:89] found id: ""
	I0816 00:35:30.079959   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.079968   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:30.079974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:30.080030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:30.114173   79191 cri.go:89] found id: ""
	I0816 00:35:30.114199   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.114207   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:30.114216   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:30.114227   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:30.154765   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:30.154791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:30.204410   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:30.204442   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:30.218909   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:30.218934   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:30.294141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:30.294161   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:30.294193   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:26.995394   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.494569   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.426234   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.926349   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.926433   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.376976   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.377869   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:32.872216   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:32.886211   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:32.886289   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:32.929416   79191 cri.go:89] found id: ""
	I0816 00:35:32.929440   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.929449   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:32.929456   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:32.929520   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:32.977862   79191 cri.go:89] found id: ""
	I0816 00:35:32.977887   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.977896   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:32.977920   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:32.977978   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:33.015569   79191 cri.go:89] found id: ""
	I0816 00:35:33.015593   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.015603   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:33.015622   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:33.015681   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:33.050900   79191 cri.go:89] found id: ""
	I0816 00:35:33.050934   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.050943   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:33.050959   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:33.051033   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:33.084529   79191 cri.go:89] found id: ""
	I0816 00:35:33.084556   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.084564   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:33.084569   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:33.084619   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:33.119819   79191 cri.go:89] found id: ""
	I0816 00:35:33.119845   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.119855   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:33.119863   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:33.119928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:33.159922   79191 cri.go:89] found id: ""
	I0816 00:35:33.159952   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.159959   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:33.159965   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:33.160023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:33.194977   79191 cri.go:89] found id: ""
	I0816 00:35:33.195006   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.195018   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:33.195030   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:33.195044   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:33.208578   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:33.208623   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:33.282177   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:33.282198   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:33.282211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:33.365514   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:33.365552   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:33.405190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:33.405226   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:35.959033   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:35.971866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:35.971934   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:36.008442   79191 cri.go:89] found id: ""
	I0816 00:35:36.008473   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.008483   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:36.008489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:36.008547   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:36.044346   79191 cri.go:89] found id: ""
	I0816 00:35:36.044374   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.044386   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:36.044393   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:36.044444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:36.083078   79191 cri.go:89] found id: ""
	I0816 00:35:36.083104   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.083112   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:36.083118   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:36.083166   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:36.120195   79191 cri.go:89] found id: ""
	I0816 00:35:36.120218   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.120226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:36.120232   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:36.120288   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:36.156186   79191 cri.go:89] found id: ""
	I0816 00:35:36.156215   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.156225   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:36.156233   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:36.156295   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:36.195585   79191 cri.go:89] found id: ""
	I0816 00:35:36.195613   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.195623   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:36.195631   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:36.195699   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:36.231110   79191 cri.go:89] found id: ""
	I0816 00:35:36.231133   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.231141   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:36.231147   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:36.231210   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:36.268745   79191 cri.go:89] found id: ""
	I0816 00:35:36.268770   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.268778   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:36.268786   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:36.268800   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:36.282225   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:36.282251   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:36.351401   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:36.351431   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:36.351447   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:36.429970   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:36.430003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:36.473745   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:36.473776   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:31.994163   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.994256   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.995188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:36.427247   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.926123   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.877303   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:39.027444   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:39.041107   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:39.041170   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:39.079807   79191 cri.go:89] found id: ""
	I0816 00:35:39.079830   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.079837   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:39.079843   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:39.079890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:39.115532   79191 cri.go:89] found id: ""
	I0816 00:35:39.115559   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.115569   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:39.115576   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:39.115623   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:39.150197   79191 cri.go:89] found id: ""
	I0816 00:35:39.150222   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.150233   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:39.150241   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:39.150300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:39.186480   79191 cri.go:89] found id: ""
	I0816 00:35:39.186507   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.186515   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:39.186521   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:39.186572   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:39.221576   79191 cri.go:89] found id: ""
	I0816 00:35:39.221605   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.221615   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:39.221620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:39.221669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:39.259846   79191 cri.go:89] found id: ""
	I0816 00:35:39.259877   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.259888   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:39.259896   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:39.259950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:39.294866   79191 cri.go:89] found id: ""
	I0816 00:35:39.294891   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.294898   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:39.294903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:39.294952   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:39.329546   79191 cri.go:89] found id: ""
	I0816 00:35:39.329576   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.329584   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:39.329593   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:39.329604   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:39.371579   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:39.371609   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:39.422903   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:39.422935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:39.437673   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:39.437699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:39.515146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:39.515171   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:39.515185   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:38.495377   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.495856   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.926444   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:43.426438   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.376648   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.877521   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.101733   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:42.115563   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:42.115640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:42.155187   79191 cri.go:89] found id: ""
	I0816 00:35:42.155216   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.155224   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:42.155230   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:42.155282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:42.194414   79191 cri.go:89] found id: ""
	I0816 00:35:42.194444   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.194456   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:42.194464   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:42.194523   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:42.234219   79191 cri.go:89] found id: ""
	I0816 00:35:42.234245   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.234253   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:42.234259   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:42.234314   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:42.272278   79191 cri.go:89] found id: ""
	I0816 00:35:42.272304   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.272314   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:42.272322   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:42.272381   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:42.309973   79191 cri.go:89] found id: ""
	I0816 00:35:42.309999   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.310007   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:42.310013   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:42.310066   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:42.350745   79191 cri.go:89] found id: ""
	I0816 00:35:42.350773   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.350782   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:42.350790   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:42.350853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:42.387775   79191 cri.go:89] found id: ""
	I0816 00:35:42.387803   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.387813   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:42.387832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:42.387902   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:42.425086   79191 cri.go:89] found id: ""
	I0816 00:35:42.425110   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.425118   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:42.425125   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:42.425138   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:42.515543   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:42.515575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:42.558348   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:42.558372   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:42.613026   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:42.613059   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.628907   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:42.628932   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:42.710265   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.211083   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:45.225001   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:45.225083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:45.258193   79191 cri.go:89] found id: ""
	I0816 00:35:45.258223   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.258232   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:45.258240   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:45.258297   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:45.294255   79191 cri.go:89] found id: ""
	I0816 00:35:45.294278   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.294286   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:45.294291   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:45.294335   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:45.329827   79191 cri.go:89] found id: ""
	I0816 00:35:45.329875   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.329886   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:45.329894   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:45.329944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:45.366095   79191 cri.go:89] found id: ""
	I0816 00:35:45.366124   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.366134   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:45.366141   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:45.366202   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:45.402367   79191 cri.go:89] found id: ""
	I0816 00:35:45.402390   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.402398   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:45.402403   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:45.402449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:45.439272   79191 cri.go:89] found id: ""
	I0816 00:35:45.439293   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.439300   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:45.439310   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:45.439358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:45.474351   79191 cri.go:89] found id: ""
	I0816 00:35:45.474380   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.474388   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:45.474393   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:45.474445   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:45.519636   79191 cri.go:89] found id: ""
	I0816 00:35:45.519661   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.519671   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:45.519680   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:45.519695   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:45.593425   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.593446   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:45.593458   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:45.668058   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:45.668095   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:45.716090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:45.716125   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:45.774177   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:45.774207   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.495914   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:44.996641   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.426740   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.925719   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.376025   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.376628   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.876035   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:48.288893   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:48.302256   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:48.302321   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:48.337001   79191 cri.go:89] found id: ""
	I0816 00:35:48.337030   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.337041   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:48.337048   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:48.337110   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:48.378341   79191 cri.go:89] found id: ""
	I0816 00:35:48.378367   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.378375   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:48.378384   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:48.378447   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:48.414304   79191 cri.go:89] found id: ""
	I0816 00:35:48.414383   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.414402   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:48.414410   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:48.414473   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:48.453946   79191 cri.go:89] found id: ""
	I0816 00:35:48.453969   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.453976   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:48.453982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:48.454036   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:48.489597   79191 cri.go:89] found id: ""
	I0816 00:35:48.489617   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.489623   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:48.489629   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:48.489672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:48.524195   79191 cri.go:89] found id: ""
	I0816 00:35:48.524222   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.524232   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:48.524239   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:48.524293   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:48.567854   79191 cri.go:89] found id: ""
	I0816 00:35:48.567880   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.567890   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:48.567897   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:48.567956   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:48.603494   79191 cri.go:89] found id: ""
	I0816 00:35:48.603520   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.603530   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:48.603540   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:48.603556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:48.642927   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:48.642960   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:48.693761   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:48.693791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:48.708790   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:48.708818   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:48.780072   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:48.780092   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:48.780106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.362108   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:51.376113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:51.376185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:51.413988   79191 cri.go:89] found id: ""
	I0816 00:35:51.414022   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.414033   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:51.414041   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:51.414101   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:51.460901   79191 cri.go:89] found id: ""
	I0816 00:35:51.460937   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.460948   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:51.460956   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:51.461019   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:51.497178   79191 cri.go:89] found id: ""
	I0816 00:35:51.497205   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.497215   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:51.497223   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:51.497365   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:51.534559   79191 cri.go:89] found id: ""
	I0816 00:35:51.534589   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.534600   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:51.534607   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:51.534668   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:51.570258   79191 cri.go:89] found id: ""
	I0816 00:35:51.570280   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.570287   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:51.570293   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:51.570356   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:51.609639   79191 cri.go:89] found id: ""
	I0816 00:35:51.609665   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.609675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:51.609683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:51.609742   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:51.645629   79191 cri.go:89] found id: ""
	I0816 00:35:51.645652   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.645659   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:51.645664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:51.645731   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:51.683325   79191 cri.go:89] found id: ""
	I0816 00:35:51.683344   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.683351   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:51.683358   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:51.683369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:51.739101   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:51.739133   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:51.753436   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:51.753466   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:35:47.494904   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.495416   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.926975   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:51.928318   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:52.376854   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.880623   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:35:51.831242   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:51.831268   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:51.831294   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.926924   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:51.926970   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:54.472667   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:54.486706   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:54.486785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:54.524180   79191 cri.go:89] found id: ""
	I0816 00:35:54.524203   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.524211   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:54.524216   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:54.524273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:54.563758   79191 cri.go:89] found id: ""
	I0816 00:35:54.563781   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.563788   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:54.563795   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:54.563859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:54.599442   79191 cri.go:89] found id: ""
	I0816 00:35:54.599471   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.599481   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:54.599488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:54.599553   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:54.633521   79191 cri.go:89] found id: ""
	I0816 00:35:54.633547   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.633558   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:54.633565   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:54.633628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:54.670036   79191 cri.go:89] found id: ""
	I0816 00:35:54.670064   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.670075   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:54.670083   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:54.670148   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:54.707565   79191 cri.go:89] found id: ""
	I0816 00:35:54.707587   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.707594   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:54.707600   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:54.707659   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:54.744500   79191 cri.go:89] found id: ""
	I0816 00:35:54.744530   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.744541   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:54.744548   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:54.744612   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:54.778964   79191 cri.go:89] found id: ""
	I0816 00:35:54.778988   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.778995   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:54.779007   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:54.779020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:54.831806   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:54.831838   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:54.845954   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:54.845979   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:54.921817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:54.921855   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:54.921871   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:55.006401   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:55.006439   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:51.996591   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.495673   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.427044   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:56.927184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.376333   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.548661   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:57.562489   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:57.562549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:57.597855   79191 cri.go:89] found id: ""
	I0816 00:35:57.597881   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.597891   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:57.597899   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:57.597961   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:57.634085   79191 cri.go:89] found id: ""
	I0816 00:35:57.634114   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.634126   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:57.634133   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:57.634193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:57.671748   79191 cri.go:89] found id: ""
	I0816 00:35:57.671779   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.671788   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:57.671795   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:57.671859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:57.708836   79191 cri.go:89] found id: ""
	I0816 00:35:57.708862   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.708870   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:57.708877   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:57.708940   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:57.744601   79191 cri.go:89] found id: ""
	I0816 00:35:57.744630   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.744639   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:57.744645   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:57.744706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:57.781888   79191 cri.go:89] found id: ""
	I0816 00:35:57.781919   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.781929   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:57.781937   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:57.781997   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:57.822612   79191 cri.go:89] found id: ""
	I0816 00:35:57.822634   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.822641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:57.822647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:57.822706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:57.873968   79191 cri.go:89] found id: ""
	I0816 00:35:57.873998   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.874008   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:57.874019   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:57.874037   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:57.896611   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:57.896643   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:57.995575   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:57.995597   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:57.995612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:58.077196   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:58.077230   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:58.116956   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:58.116985   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:00.664805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:00.678425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:00.678501   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:00.715522   79191 cri.go:89] found id: ""
	I0816 00:36:00.715548   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.715557   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:00.715562   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:00.715608   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:00.749892   79191 cri.go:89] found id: ""
	I0816 00:36:00.749920   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.749931   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:00.749938   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:00.750006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:00.787302   79191 cri.go:89] found id: ""
	I0816 00:36:00.787325   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.787332   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:00.787338   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:00.787392   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:00.821866   79191 cri.go:89] found id: ""
	I0816 00:36:00.821894   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.821906   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:00.821914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:00.821971   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:00.856346   79191 cri.go:89] found id: ""
	I0816 00:36:00.856369   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.856377   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:00.856382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:00.856431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:00.893569   79191 cri.go:89] found id: ""
	I0816 00:36:00.893596   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.893606   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:00.893614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:00.893677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:00.930342   79191 cri.go:89] found id: ""
	I0816 00:36:00.930367   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.930378   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:00.930386   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:00.930622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:00.966039   79191 cri.go:89] found id: ""
	I0816 00:36:00.966071   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.966085   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:00.966095   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:00.966109   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:01.045594   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:01.045631   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:01.089555   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:01.089586   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:01.141597   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:01.141633   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:01.156260   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:01.156286   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:01.230573   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:56.995077   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:58.995897   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.495116   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.426099   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.926011   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.927327   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.376842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.875993   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.730825   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:03.744766   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:03.744838   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:03.781095   79191 cri.go:89] found id: ""
	I0816 00:36:03.781124   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.781142   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:03.781150   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:03.781215   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:03.815637   79191 cri.go:89] found id: ""
	I0816 00:36:03.815669   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.815680   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:03.815687   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:03.815741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:03.850076   79191 cri.go:89] found id: ""
	I0816 00:36:03.850110   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.850122   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:03.850130   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:03.850185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:03.888840   79191 cri.go:89] found id: ""
	I0816 00:36:03.888863   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.888872   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:03.888879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:03.888941   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:03.928317   79191 cri.go:89] found id: ""
	I0816 00:36:03.928341   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.928350   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:03.928359   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:03.928413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:03.964709   79191 cri.go:89] found id: ""
	I0816 00:36:03.964741   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.964751   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:03.964760   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:03.964830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:03.999877   79191 cri.go:89] found id: ""
	I0816 00:36:03.999902   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.999912   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:03.999919   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:03.999981   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:04.036772   79191 cri.go:89] found id: ""
	I0816 00:36:04.036799   79191 logs.go:276] 0 containers: []
	W0816 00:36:04.036810   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:04.036820   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:04.036833   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:04.118843   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:04.118879   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:04.162491   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:04.162548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:04.215100   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:04.215134   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:04.229043   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:04.229069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:04.307480   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:03.495661   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.995711   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.426223   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:08.426470   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.876718   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:07.877431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.807640   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:06.821144   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:06.821203   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:06.857743   79191 cri.go:89] found id: ""
	I0816 00:36:06.857776   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.857786   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:06.857794   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:06.857872   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:06.895980   79191 cri.go:89] found id: ""
	I0816 00:36:06.896007   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.896018   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:06.896025   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:06.896090   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:06.935358   79191 cri.go:89] found id: ""
	I0816 00:36:06.935389   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.935399   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:06.935406   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:06.935461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:06.971533   79191 cri.go:89] found id: ""
	I0816 00:36:06.971561   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.971572   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:06.971580   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:06.971640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:07.007786   79191 cri.go:89] found id: ""
	I0816 00:36:07.007812   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.007823   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:07.007830   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:07.007890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:07.044060   79191 cri.go:89] found id: ""
	I0816 00:36:07.044092   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.044104   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:07.044112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:07.044185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:07.080058   79191 cri.go:89] found id: ""
	I0816 00:36:07.080085   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.080094   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:07.080101   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:07.080156   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:07.117749   79191 cri.go:89] found id: ""
	I0816 00:36:07.117773   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.117780   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:07.117787   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:07.117799   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:07.171418   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:07.171453   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:07.185520   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:07.185542   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:07.257817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:07.257872   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:07.257888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:07.339530   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:07.339576   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:09.882613   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:09.895873   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:09.895950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:09.936739   79191 cri.go:89] found id: ""
	I0816 00:36:09.936766   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.936774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:09.936780   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:09.936836   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:09.974145   79191 cri.go:89] found id: ""
	I0816 00:36:09.974168   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.974180   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:09.974186   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:09.974243   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:10.012166   79191 cri.go:89] found id: ""
	I0816 00:36:10.012196   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.012206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:10.012214   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:10.012265   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:10.051080   79191 cri.go:89] found id: ""
	I0816 00:36:10.051103   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.051111   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:10.051117   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:10.051176   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:10.088519   79191 cri.go:89] found id: ""
	I0816 00:36:10.088548   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.088559   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:10.088567   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:10.088628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:10.123718   79191 cri.go:89] found id: ""
	I0816 00:36:10.123744   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.123752   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:10.123758   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:10.123805   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:10.161900   79191 cri.go:89] found id: ""
	I0816 00:36:10.161922   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.161929   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:10.161995   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:10.162064   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:10.196380   79191 cri.go:89] found id: ""
	I0816 00:36:10.196408   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.196419   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:10.196429   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:10.196443   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:10.248276   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:10.248309   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:10.262241   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:10.262269   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:10.340562   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:10.340598   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:10.340626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:10.417547   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:10.417578   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:07.996930   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:09.997666   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.426502   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.426976   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.377172   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.877236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.962310   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:12.976278   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:12.976338   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:13.014501   79191 cri.go:89] found id: ""
	I0816 00:36:13.014523   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.014530   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:13.014536   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:13.014587   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:13.055942   79191 cri.go:89] found id: ""
	I0816 00:36:13.055970   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.055979   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:13.055987   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:13.056048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:13.090309   79191 cri.go:89] found id: ""
	I0816 00:36:13.090336   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.090346   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:13.090354   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:13.090413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:13.124839   79191 cri.go:89] found id: ""
	I0816 00:36:13.124865   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.124876   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:13.124884   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:13.124945   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:13.164535   79191 cri.go:89] found id: ""
	I0816 00:36:13.164560   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.164567   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:13.164573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:13.164630   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:13.198651   79191 cri.go:89] found id: ""
	I0816 00:36:13.198699   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.198710   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:13.198718   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:13.198785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:13.233255   79191 cri.go:89] found id: ""
	I0816 00:36:13.233278   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.233286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:13.233292   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:13.233348   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:13.267327   79191 cri.go:89] found id: ""
	I0816 00:36:13.267351   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.267359   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:13.267367   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:13.267384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:13.352053   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:13.352089   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:13.393438   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:13.393471   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:13.445397   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:13.445430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:13.459143   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:13.459177   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:13.530160   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.031296   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:16.045557   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:16.045618   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:16.081828   79191 cri.go:89] found id: ""
	I0816 00:36:16.081871   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.081882   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:16.081890   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:16.081949   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:16.116228   79191 cri.go:89] found id: ""
	I0816 00:36:16.116254   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.116264   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:16.116272   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:16.116334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:16.150051   79191 cri.go:89] found id: ""
	I0816 00:36:16.150079   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.150087   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:16.150093   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:16.150139   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:16.186218   79191 cri.go:89] found id: ""
	I0816 00:36:16.186241   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.186248   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:16.186254   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:16.186301   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:16.223223   79191 cri.go:89] found id: ""
	I0816 00:36:16.223255   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.223263   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:16.223270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:16.223316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:16.259929   79191 cri.go:89] found id: ""
	I0816 00:36:16.259953   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.259960   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:16.259970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:16.260099   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:16.294611   79191 cri.go:89] found id: ""
	I0816 00:36:16.294633   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.294641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:16.294649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:16.294725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:16.333492   79191 cri.go:89] found id: ""
	I0816 00:36:16.333523   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.333533   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:16.333544   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:16.333563   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:16.385970   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:16.386002   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:16.400359   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:16.400384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:16.471363   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.471388   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:16.471408   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:16.555990   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:16.556022   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:12.495406   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.995145   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.926160   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.426768   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:15.376672   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.876395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.876542   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.099502   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:19.112649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:19.112706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:19.145809   79191 cri.go:89] found id: ""
	I0816 00:36:19.145837   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.145858   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:19.145865   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:19.145928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:19.183737   79191 cri.go:89] found id: ""
	I0816 00:36:19.183763   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.183774   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:19.183781   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:19.183841   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:19.219729   79191 cri.go:89] found id: ""
	I0816 00:36:19.219756   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.219764   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:19.219770   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:19.219815   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:19.254450   79191 cri.go:89] found id: ""
	I0816 00:36:19.254474   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.254481   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:19.254488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:19.254540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:19.289543   79191 cri.go:89] found id: ""
	I0816 00:36:19.289573   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.289585   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:19.289592   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:19.289651   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:19.330727   79191 cri.go:89] found id: ""
	I0816 00:36:19.330748   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.330756   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:19.330762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:19.330809   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:19.368952   79191 cri.go:89] found id: ""
	I0816 00:36:19.368978   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.368986   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:19.368992   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:19.369048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:19.406211   79191 cri.go:89] found id: ""
	I0816 00:36:19.406247   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.406258   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:19.406268   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:19.406282   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:19.457996   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:19.458032   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:19.472247   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:19.472274   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:19.542840   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:19.542862   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:19.542876   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:19.624478   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:19.624520   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:16.997148   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.496434   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.427251   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:21.925550   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.925858   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.376318   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:24.376431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.165884   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:22.180005   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:22.180078   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:22.217434   79191 cri.go:89] found id: ""
	I0816 00:36:22.217463   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.217471   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:22.217478   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:22.217534   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:22.250679   79191 cri.go:89] found id: ""
	I0816 00:36:22.250708   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.250717   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:22.250725   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:22.250785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:22.284294   79191 cri.go:89] found id: ""
	I0816 00:36:22.284324   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.284334   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:22.284341   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:22.284403   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:22.320747   79191 cri.go:89] found id: ""
	I0816 00:36:22.320779   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.320790   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:22.320799   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:22.320858   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:22.355763   79191 cri.go:89] found id: ""
	I0816 00:36:22.355793   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.355803   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:22.355811   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:22.355871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:22.392762   79191 cri.go:89] found id: ""
	I0816 00:36:22.392788   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.392796   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:22.392802   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:22.392860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:22.426577   79191 cri.go:89] found id: ""
	I0816 00:36:22.426605   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.426614   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:22.426621   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:22.426682   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:22.459989   79191 cri.go:89] found id: ""
	I0816 00:36:22.460018   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.460030   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:22.460040   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:22.460054   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:22.545782   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:22.545820   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:22.587404   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:22.587431   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:22.638519   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:22.638559   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:22.653064   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:22.653087   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:22.734333   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.234823   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:25.248716   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:25.248787   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:25.284760   79191 cri.go:89] found id: ""
	I0816 00:36:25.284786   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.284793   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:25.284799   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:25.284870   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:25.325523   79191 cri.go:89] found id: ""
	I0816 00:36:25.325548   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.325556   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:25.325562   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:25.325621   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:25.365050   79191 cri.go:89] found id: ""
	I0816 00:36:25.365078   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.365088   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:25.365096   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:25.365155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:25.405005   79191 cri.go:89] found id: ""
	I0816 00:36:25.405038   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.405049   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:25.405062   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:25.405121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:25.444622   79191 cri.go:89] found id: ""
	I0816 00:36:25.444648   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.444656   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:25.444662   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:25.444710   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:25.485364   79191 cri.go:89] found id: ""
	I0816 00:36:25.485394   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.485404   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:25.485413   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:25.485492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:25.521444   79191 cri.go:89] found id: ""
	I0816 00:36:25.521471   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.521482   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:25.521490   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:25.521550   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:25.556763   79191 cri.go:89] found id: ""
	I0816 00:36:25.556789   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.556796   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:25.556805   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:25.556817   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:25.606725   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:25.606759   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:25.623080   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:25.623108   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:25.705238   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.705258   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:25.705280   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:25.782188   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:25.782224   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:21.994519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.995061   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.494442   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:25.926835   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.427012   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.876206   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.876563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.325018   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:28.337778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:28.337860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:28.378452   79191 cri.go:89] found id: ""
	I0816 00:36:28.378482   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.378492   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:28.378499   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:28.378556   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:28.412103   79191 cri.go:89] found id: ""
	I0816 00:36:28.412132   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.412143   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:28.412150   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:28.412214   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:28.447363   79191 cri.go:89] found id: ""
	I0816 00:36:28.447388   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.447396   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:28.447401   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:28.447452   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:28.481199   79191 cri.go:89] found id: ""
	I0816 00:36:28.481228   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.481242   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:28.481251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:28.481305   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:28.517523   79191 cri.go:89] found id: ""
	I0816 00:36:28.517545   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.517552   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:28.517558   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:28.517620   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:28.552069   79191 cri.go:89] found id: ""
	I0816 00:36:28.552101   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.552112   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:28.552120   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:28.552193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:28.594124   79191 cri.go:89] found id: ""
	I0816 00:36:28.594148   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.594158   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:28.594166   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:28.594228   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:28.631451   79191 cri.go:89] found id: ""
	I0816 00:36:28.631472   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.631480   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:28.631488   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:28.631498   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:28.685335   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:28.685368   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:28.700852   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:28.700877   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:28.773932   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:28.773957   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:28.773972   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:28.848951   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:28.848989   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:31.389208   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:31.403731   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:31.403813   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:31.440979   79191 cri.go:89] found id: ""
	I0816 00:36:31.441010   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.441020   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:31.441028   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:31.441092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:31.476435   79191 cri.go:89] found id: ""
	I0816 00:36:31.476458   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.476465   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:31.476471   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:31.476530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:31.514622   79191 cri.go:89] found id: ""
	I0816 00:36:31.514644   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.514651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:31.514657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:31.514715   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:31.554503   79191 cri.go:89] found id: ""
	I0816 00:36:31.554533   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.554543   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:31.554551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:31.554609   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:31.590283   79191 cri.go:89] found id: ""
	I0816 00:36:31.590317   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.590325   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:31.590332   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:31.590380   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:31.625969   79191 cri.go:89] found id: ""
	I0816 00:36:31.626003   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.626014   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:31.626031   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:31.626102   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:31.660489   79191 cri.go:89] found id: ""
	I0816 00:36:31.660513   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.660520   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:31.660526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:31.660583   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:31.694728   79191 cri.go:89] found id: ""
	I0816 00:36:31.694761   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.694769   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:31.694779   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:31.694790   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:31.760631   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:31.760663   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:31.774858   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:31.774886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:36:28.994228   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.994276   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.926313   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.426045   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.877175   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.378602   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:36:31.851125   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:31.851145   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:31.851156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:31.934491   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:31.934521   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:34.476368   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:34.489252   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:34.489308   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:34.524932   79191 cri.go:89] found id: ""
	I0816 00:36:34.524964   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.524972   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:34.524977   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:34.525032   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:34.559434   79191 cri.go:89] found id: ""
	I0816 00:36:34.559462   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.559473   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:34.559481   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:34.559543   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:34.598700   79191 cri.go:89] found id: ""
	I0816 00:36:34.598728   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.598739   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:34.598747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:34.598808   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:34.632413   79191 cri.go:89] found id: ""
	I0816 00:36:34.632438   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.632448   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:34.632456   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:34.632514   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:34.668385   79191 cri.go:89] found id: ""
	I0816 00:36:34.668409   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.668418   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:34.668425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:34.668486   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:34.703728   79191 cri.go:89] found id: ""
	I0816 00:36:34.703754   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.703764   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:34.703772   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:34.703832   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:34.743119   79191 cri.go:89] found id: ""
	I0816 00:36:34.743152   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.743161   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:34.743171   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:34.743230   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:34.778932   79191 cri.go:89] found id: ""
	I0816 00:36:34.778955   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.778963   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:34.778971   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:34.778987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:34.832050   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:34.832084   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:34.845700   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:34.845728   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:34.917535   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:34.917554   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:34.917565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:35.005262   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:35.005295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:32.994435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:34.994503   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.926422   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.876400   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:38.376351   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.547107   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:37.562035   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:37.562095   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:37.605992   79191 cri.go:89] found id: ""
	I0816 00:36:37.606021   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.606028   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:37.606035   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:37.606092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:37.642613   79191 cri.go:89] found id: ""
	I0816 00:36:37.642642   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.642653   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:37.642660   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:37.642708   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:37.677810   79191 cri.go:89] found id: ""
	I0816 00:36:37.677863   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.677875   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:37.677883   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:37.677939   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:37.714490   79191 cri.go:89] found id: ""
	I0816 00:36:37.714514   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.714522   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:37.714529   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:37.714575   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:37.750807   79191 cri.go:89] found id: ""
	I0816 00:36:37.750837   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.750844   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:37.750850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:37.750912   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:37.790307   79191 cri.go:89] found id: ""
	I0816 00:36:37.790337   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.790347   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:37.790355   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:37.790404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:37.826811   79191 cri.go:89] found id: ""
	I0816 00:36:37.826838   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.826848   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:37.826856   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:37.826920   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:37.862066   79191 cri.go:89] found id: ""
	I0816 00:36:37.862091   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.862101   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:37.862112   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:37.862127   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:37.917127   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:37.917161   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:37.932986   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:37.933024   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:38.008715   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:38.008739   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:38.008754   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:38.088744   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:38.088778   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:40.643426   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:40.659064   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:40.659128   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:40.702486   79191 cri.go:89] found id: ""
	I0816 00:36:40.702513   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.702523   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:40.702530   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:40.702595   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:40.736016   79191 cri.go:89] found id: ""
	I0816 00:36:40.736044   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.736057   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:40.736064   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:40.736125   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:40.779665   79191 cri.go:89] found id: ""
	I0816 00:36:40.779704   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.779724   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:40.779733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:40.779795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:40.818612   79191 cri.go:89] found id: ""
	I0816 00:36:40.818633   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.818640   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:40.818647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:40.818695   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:40.855990   79191 cri.go:89] found id: ""
	I0816 00:36:40.856014   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.856021   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:40.856027   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:40.856074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:40.894792   79191 cri.go:89] found id: ""
	I0816 00:36:40.894827   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.894836   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:40.894845   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:40.894894   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:40.932233   79191 cri.go:89] found id: ""
	I0816 00:36:40.932255   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.932263   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:40.932268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:40.932324   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:40.974601   79191 cri.go:89] found id: ""
	I0816 00:36:40.974624   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.974633   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:40.974642   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:40.974660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:41.049185   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:41.049209   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:41.049223   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:41.129446   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:41.129481   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:41.170312   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:41.170341   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:41.226217   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:41.226254   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:36.995268   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:39.494273   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:41.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.426501   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.926122   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.877227   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.878644   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:43.741485   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:43.756248   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:43.756325   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:43.792440   79191 cri.go:89] found id: ""
	I0816 00:36:43.792469   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.792480   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:43.792488   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:43.792549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:43.829906   79191 cri.go:89] found id: ""
	I0816 00:36:43.829933   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.829941   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:43.829947   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:43.830003   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:43.880305   79191 cri.go:89] found id: ""
	I0816 00:36:43.880330   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.880337   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:43.880343   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:43.880399   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:43.937899   79191 cri.go:89] found id: ""
	I0816 00:36:43.937929   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.937939   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:43.937953   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:43.938023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:43.997578   79191 cri.go:89] found id: ""
	I0816 00:36:43.997603   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.997610   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:43.997620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:43.997672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:44.035606   79191 cri.go:89] found id: ""
	I0816 00:36:44.035629   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.035637   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:44.035643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:44.035692   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:44.072919   79191 cri.go:89] found id: ""
	I0816 00:36:44.072950   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.072961   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:44.072968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:44.073043   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:44.108629   79191 cri.go:89] found id: ""
	I0816 00:36:44.108659   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.108681   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:44.108692   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:44.108705   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:44.149127   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:44.149151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:44.201694   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:44.201737   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:44.217161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:44.217199   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:44.284335   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:44.284362   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:44.284379   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:43.996478   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.494382   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:44.926542   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.926713   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:45.376030   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:47.875418   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.877201   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.869196   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:46.883519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:46.883584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:46.924767   79191 cri.go:89] found id: ""
	I0816 00:36:46.924806   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.924821   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:46.924829   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:46.924889   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:46.963282   79191 cri.go:89] found id: ""
	I0816 00:36:46.963309   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.963320   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:46.963327   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:46.963389   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:47.001421   79191 cri.go:89] found id: ""
	I0816 00:36:47.001450   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.001458   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:47.001463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:47.001518   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:47.037679   79191 cri.go:89] found id: ""
	I0816 00:36:47.037702   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.037713   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:47.037720   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:47.037778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:47.078009   79191 cri.go:89] found id: ""
	I0816 00:36:47.078039   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.078050   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:47.078056   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:47.078113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:47.119032   79191 cri.go:89] found id: ""
	I0816 00:36:47.119056   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.119064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:47.119069   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:47.119127   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:47.154893   79191 cri.go:89] found id: ""
	I0816 00:36:47.154919   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.154925   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:47.154933   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:47.154993   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:47.194544   79191 cri.go:89] found id: ""
	I0816 00:36:47.194571   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.194582   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:47.194592   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:47.194612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:47.267148   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:47.267172   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:47.267186   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:47.345257   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:47.345295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:47.386207   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:47.386233   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:47.436171   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:47.436201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
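	(Each retry above follows the same diagnostic pattern: probe for control-plane containers with crictl, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status output. The same data can be collected by hand with the commands the log itself runs — the paths and the bundled v1.20.0 kubectl binary are exactly as shown above; this only lays the sequence out once:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	)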
	I0816 00:36:49.949977   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:49.965702   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:49.965761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:50.002443   79191 cri.go:89] found id: ""
	I0816 00:36:50.002470   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.002481   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:50.002489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:50.002548   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:50.039123   79191 cri.go:89] found id: ""
	I0816 00:36:50.039155   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.039162   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:50.039168   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:50.039220   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:50.074487   79191 cri.go:89] found id: ""
	I0816 00:36:50.074517   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.074527   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:50.074535   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:50.074593   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:50.108980   79191 cri.go:89] found id: ""
	I0816 00:36:50.109008   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.109018   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:50.109025   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:50.109082   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:50.149182   79191 cri.go:89] found id: ""
	I0816 00:36:50.149202   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.149209   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:50.149215   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:50.149261   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:50.183066   79191 cri.go:89] found id: ""
	I0816 00:36:50.183094   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.183102   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:50.183108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:50.183165   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:50.220200   79191 cri.go:89] found id: ""
	I0816 00:36:50.220231   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.220240   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:50.220246   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:50.220302   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:50.258059   79191 cri.go:89] found id: ""
	I0816 00:36:50.258083   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.258092   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:50.258100   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:50.258110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:50.300560   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:50.300591   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:50.350548   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:50.350581   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:50.364792   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:50.364816   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:50.437723   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:50.437746   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:50.437761   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:48.995009   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:50.995542   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.425926   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:51.427896   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.926363   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:52.375826   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:54.876435   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.015846   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:53.029184   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:53.029246   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:53.064306   79191 cri.go:89] found id: ""
	I0816 00:36:53.064338   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.064346   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:53.064352   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:53.064404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:53.104425   79191 cri.go:89] found id: ""
	I0816 00:36:53.104458   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.104468   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:53.104476   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:53.104538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:53.139470   79191 cri.go:89] found id: ""
	I0816 00:36:53.139493   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.139500   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:53.139506   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:53.139551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:53.185195   79191 cri.go:89] found id: ""
	I0816 00:36:53.185225   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.185234   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:53.185242   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:53.185300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:53.221897   79191 cri.go:89] found id: ""
	I0816 00:36:53.221925   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.221935   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:53.221943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:53.222006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:53.258810   79191 cri.go:89] found id: ""
	I0816 00:36:53.258841   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.258852   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:53.258859   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:53.258924   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:53.298672   79191 cri.go:89] found id: ""
	I0816 00:36:53.298701   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.298711   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:53.298719   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:53.298778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:53.333498   79191 cri.go:89] found id: ""
	I0816 00:36:53.333520   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.333527   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:53.333535   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:53.333548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.370495   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:53.370530   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:53.423938   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:53.423982   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:53.438897   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:53.438926   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:53.505951   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:53.505973   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:53.505987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.089638   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:56.103832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:56.103893   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:56.148010   79191 cri.go:89] found id: ""
	I0816 00:36:56.148038   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.148048   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:56.148057   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:56.148120   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:56.185631   79191 cri.go:89] found id: ""
	I0816 00:36:56.185663   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.185673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:56.185680   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:56.185739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:56.222064   79191 cri.go:89] found id: ""
	I0816 00:36:56.222093   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.222104   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:56.222112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:56.222162   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:56.260462   79191 cri.go:89] found id: ""
	I0816 00:36:56.260494   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.260504   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:56.260513   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:56.260574   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:56.296125   79191 cri.go:89] found id: ""
	I0816 00:36:56.296154   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.296164   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:56.296172   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:56.296236   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:56.333278   79191 cri.go:89] found id: ""
	I0816 00:36:56.333305   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.333316   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:56.333324   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:56.333385   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:56.368924   79191 cri.go:89] found id: ""
	I0816 00:36:56.368952   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.368962   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:56.368970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:56.369034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:56.407148   79191 cri.go:89] found id: ""
	I0816 00:36:56.407180   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.407190   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:56.407201   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:56.407215   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:56.464745   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:56.464779   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:56.478177   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:56.478204   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:56.555827   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:56.555851   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:56.555864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.640001   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:56.640040   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.495546   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.994786   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:58.426865   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:57.376484   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.876765   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.181423   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:59.195722   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:59.195804   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:59.232043   79191 cri.go:89] found id: ""
	I0816 00:36:59.232067   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.232075   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:59.232081   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:59.232132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:59.270628   79191 cri.go:89] found id: ""
	I0816 00:36:59.270656   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.270673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:59.270681   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:59.270743   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:59.304054   79191 cri.go:89] found id: ""
	I0816 00:36:59.304089   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.304100   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:59.304108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:59.304169   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:59.339386   79191 cri.go:89] found id: ""
	I0816 00:36:59.339410   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.339417   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:59.339423   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:59.339483   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:59.381313   79191 cri.go:89] found id: ""
	I0816 00:36:59.381361   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.381376   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:59.381385   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:59.381449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:59.417060   79191 cri.go:89] found id: ""
	I0816 00:36:59.417090   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.417101   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:59.417109   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:59.417160   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:59.461034   79191 cri.go:89] found id: ""
	I0816 00:36:59.461060   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.461071   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:59.461078   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:59.461136   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:59.496248   79191 cri.go:89] found id: ""
	I0816 00:36:59.496276   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.496286   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:59.496297   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:59.496312   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:59.566779   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:59.566803   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:59.566829   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:59.651999   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:59.652034   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:59.693286   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:59.693310   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:59.746677   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:59.746711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:58.494370   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.494959   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.927036   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:03.425008   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.376921   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.876676   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
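	(The interleaved pod_ready lines come from parallel test runs polling the metrics-server pod's Ready condition. An equivalent one-off check with kubectl, using a pod name taken from the log — the jsonpath query is a standard kubectl feature, not something shown in this log — would be:
	    kubectl -n kube-system get pod metrics-server-6867b74b74-mm5l7 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" while the pod is not Ready, matching the has status "Ready":"False" lines above
	)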
	I0816 00:37:02.262527   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:02.277903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:02.277965   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:02.323846   79191 cri.go:89] found id: ""
	I0816 00:37:02.323868   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.323876   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:02.323882   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:02.323938   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:02.359552   79191 cri.go:89] found id: ""
	I0816 00:37:02.359578   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.359589   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:02.359596   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:02.359657   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:02.395062   79191 cri.go:89] found id: ""
	I0816 00:37:02.395087   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.395094   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:02.395100   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:02.395155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:02.432612   79191 cri.go:89] found id: ""
	I0816 00:37:02.432636   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.432646   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:02.432654   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:02.432712   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:02.468612   79191 cri.go:89] found id: ""
	I0816 00:37:02.468640   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.468651   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:02.468659   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:02.468716   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:02.514472   79191 cri.go:89] found id: ""
	I0816 00:37:02.514500   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.514511   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:02.514519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:02.514576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:02.551964   79191 cri.go:89] found id: ""
	I0816 00:37:02.551993   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.552003   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:02.552011   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:02.552061   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:02.588018   79191 cri.go:89] found id: ""
	I0816 00:37:02.588044   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.588053   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:02.588063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:02.588081   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:02.638836   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:02.638875   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:02.653581   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:02.653613   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:02.737018   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.737047   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:02.737065   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:02.819726   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:02.819763   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.364943   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:05.379433   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:05.379492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:05.419165   79191 cri.go:89] found id: ""
	I0816 00:37:05.419191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.419198   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:05.419204   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:05.419264   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:05.454417   79191 cri.go:89] found id: ""
	I0816 00:37:05.454438   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.454446   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:05.454452   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:05.454497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:05.490162   79191 cri.go:89] found id: ""
	I0816 00:37:05.490191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.490203   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:05.490210   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:05.490268   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:05.527303   79191 cri.go:89] found id: ""
	I0816 00:37:05.527327   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.527334   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:05.527340   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:05.527393   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:05.562271   79191 cri.go:89] found id: ""
	I0816 00:37:05.562302   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.562310   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:05.562316   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:05.562374   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:05.597800   79191 cri.go:89] found id: ""
	I0816 00:37:05.597823   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.597830   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:05.597837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:05.597905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:05.633996   79191 cri.go:89] found id: ""
	I0816 00:37:05.634021   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.634028   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:05.634034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:05.634088   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:05.672408   79191 cri.go:89] found id: ""
	I0816 00:37:05.672437   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.672446   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:05.672457   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:05.672472   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:05.750956   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:05.750995   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.795573   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:05.795603   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:05.848560   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:05.848593   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:05.862245   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:05.862268   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:05.938704   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.495728   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.994839   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:05.425507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:07.426459   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:06.877664   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.375601   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:08.439692   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:08.452850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:08.452927   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:08.490015   79191 cri.go:89] found id: ""
	I0816 00:37:08.490043   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.490053   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:08.490060   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:08.490121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:08.529631   79191 cri.go:89] found id: ""
	I0816 00:37:08.529665   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.529676   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:08.529689   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:08.529747   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:08.564858   79191 cri.go:89] found id: ""
	I0816 00:37:08.564885   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.564896   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:08.564904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:08.564966   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:08.601144   79191 cri.go:89] found id: ""
	I0816 00:37:08.601180   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.601190   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:08.601200   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:08.601257   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:08.637050   79191 cri.go:89] found id: ""
	I0816 00:37:08.637081   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.637090   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:08.637098   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:08.637158   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:08.670613   79191 cri.go:89] found id: ""
	I0816 00:37:08.670644   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.670655   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:08.670663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:08.670727   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:08.704664   79191 cri.go:89] found id: ""
	I0816 00:37:08.704690   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.704698   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:08.704704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:08.704754   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:08.741307   79191 cri.go:89] found id: ""
	I0816 00:37:08.741337   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.741348   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:08.741360   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:08.741374   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:08.755434   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:08.755459   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:08.828118   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:08.828140   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:08.828151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:08.911565   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:08.911605   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:08.954907   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:08.954937   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.508848   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:11.521998   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:11.522060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:11.558581   79191 cri.go:89] found id: ""
	I0816 00:37:11.558611   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.558622   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:11.558630   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:11.558697   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:11.593798   79191 cri.go:89] found id: ""
	I0816 00:37:11.593822   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.593830   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:11.593836   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:11.593905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:11.629619   79191 cri.go:89] found id: ""
	I0816 00:37:11.629648   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.629658   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:11.629664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:11.629717   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:11.666521   79191 cri.go:89] found id: ""
	I0816 00:37:11.666548   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.666556   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:11.666562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:11.666607   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:11.703374   79191 cri.go:89] found id: ""
	I0816 00:37:11.703406   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.703417   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:11.703427   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:11.703491   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:11.739374   79191 cri.go:89] found id: ""
	I0816 00:37:11.739403   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.739413   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:11.739420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:11.739475   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:11.774981   79191 cri.go:89] found id: ""
	I0816 00:37:11.775006   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.775013   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:11.775019   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:11.775074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:06.995675   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.495024   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:12.428179   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.377241   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.875723   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.809561   79191 cri.go:89] found id: ""
	I0816 00:37:11.809590   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.809601   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:11.809612   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:11.809626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.863071   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:11.863116   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:11.878161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:11.878191   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:11.953572   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.953594   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:11.953608   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:12.035815   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:12.035848   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:14.576547   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:14.590747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:14.590802   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:14.626732   79191 cri.go:89] found id: ""
	I0816 00:37:14.626762   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.626774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:14.626781   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:14.626833   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:14.662954   79191 cri.go:89] found id: ""
	I0816 00:37:14.662978   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.662988   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:14.662996   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:14.663057   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:14.697618   79191 cri.go:89] found id: ""
	I0816 00:37:14.697646   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.697656   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:14.697663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:14.697725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:14.735137   79191 cri.go:89] found id: ""
	I0816 00:37:14.735161   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.735168   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:14.735174   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:14.735222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:14.770625   79191 cri.go:89] found id: ""
	I0816 00:37:14.770648   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.770655   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:14.770660   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:14.770718   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:14.808678   79191 cri.go:89] found id: ""
	I0816 00:37:14.808708   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.808718   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:14.808726   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:14.808795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:14.847321   79191 cri.go:89] found id: ""
	I0816 00:37:14.847349   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.847360   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:14.847368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:14.847425   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:14.886110   79191 cri.go:89] found id: ""
	I0816 00:37:14.886136   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.886147   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:14.886156   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:14.886175   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:14.971978   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:14.972013   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:15.015620   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:15.015644   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:15.067372   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:15.067405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:15.081629   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:15.081652   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:15.151580   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.995551   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.995831   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.495016   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:14.926297   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.926367   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:18.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:15.876514   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.877987   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.652362   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:17.666201   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:17.666278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:17.698723   79191 cri.go:89] found id: ""
	I0816 00:37:17.698760   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.698772   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:17.698778   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:17.698827   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:17.732854   79191 cri.go:89] found id: ""
	I0816 00:37:17.732883   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.732893   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:17.732901   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:17.732957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:17.767665   79191 cri.go:89] found id: ""
	I0816 00:37:17.767691   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.767701   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:17.767709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:17.767769   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:17.801490   79191 cri.go:89] found id: ""
	I0816 00:37:17.801512   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.801520   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:17.801526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:17.801579   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:17.837451   79191 cri.go:89] found id: ""
	I0816 00:37:17.837479   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.837490   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:17.837498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:17.837562   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:17.872898   79191 cri.go:89] found id: ""
	I0816 00:37:17.872924   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.872934   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:17.872943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:17.873002   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:17.910325   79191 cri.go:89] found id: ""
	I0816 00:37:17.910352   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.910362   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:17.910370   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:17.910431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:17.946885   79191 cri.go:89] found id: ""
	I0816 00:37:17.946909   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.946916   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:17.946923   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:17.946935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:18.014011   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:18.014045   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:18.028850   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:18.028886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:18.099362   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:18.099381   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:18.099396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:18.180552   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:18.180588   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:20.720810   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:20.733806   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:20.733887   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:20.771300   79191 cri.go:89] found id: ""
	I0816 00:37:20.771323   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.771330   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:20.771336   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:20.771394   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:20.812327   79191 cri.go:89] found id: ""
	I0816 00:37:20.812355   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.812362   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:20.812369   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:20.812430   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:20.846830   79191 cri.go:89] found id: ""
	I0816 00:37:20.846861   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.846872   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:20.846879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:20.846948   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:20.889979   79191 cri.go:89] found id: ""
	I0816 00:37:20.890005   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.890015   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:20.890023   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:20.890086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:20.933732   79191 cri.go:89] found id: ""
	I0816 00:37:20.933762   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.933772   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:20.933778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:20.933824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:20.972341   79191 cri.go:89] found id: ""
	I0816 00:37:20.972368   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.972376   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:20.972382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:20.972444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:21.011179   79191 cri.go:89] found id: ""
	I0816 00:37:21.011207   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.011216   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:21.011224   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:21.011282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:21.045645   79191 cri.go:89] found id: ""
	I0816 00:37:21.045668   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.045675   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:21.045684   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:21.045694   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:21.099289   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:21.099321   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:21.113814   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:21.113858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:21.186314   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:21.186337   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:21.186355   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:21.271116   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:21.271152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:18.994476   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.996435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:21.425187   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.425456   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.377999   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:22.877014   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.818598   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:23.832330   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:23.832387   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:23.869258   79191 cri.go:89] found id: ""
	I0816 00:37:23.869279   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.869286   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:23.869293   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:23.869342   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:23.903958   79191 cri.go:89] found id: ""
	I0816 00:37:23.903989   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.903999   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:23.904006   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:23.904060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:23.943110   79191 cri.go:89] found id: ""
	I0816 00:37:23.943142   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.943153   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:23.943160   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:23.943222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:23.979325   79191 cri.go:89] found id: ""
	I0816 00:37:23.979356   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.979366   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:23.979374   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:23.979435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:24.017570   79191 cri.go:89] found id: ""
	I0816 00:37:24.017597   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.017607   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:24.017614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:24.017684   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:24.051522   79191 cri.go:89] found id: ""
	I0816 00:37:24.051546   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.051555   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:24.051562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:24.051626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:24.087536   79191 cri.go:89] found id: ""
	I0816 00:37:24.087561   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.087572   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:24.087579   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:24.087644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:24.123203   79191 cri.go:89] found id: ""
	I0816 00:37:24.123233   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.123245   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:24.123256   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:24.123276   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:24.178185   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:24.178225   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:24.192895   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:24.192920   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:24.273471   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:24.273492   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:24.273504   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:24.357890   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:24.357936   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:23.495269   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.994859   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.427328   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.927068   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.376932   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.377168   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:29.876182   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:26.950399   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:26.964347   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:26.964406   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:27.004694   79191 cri.go:89] found id: ""
	I0816 00:37:27.004722   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.004738   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:27.004745   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:27.004800   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:27.040051   79191 cri.go:89] found id: ""
	I0816 00:37:27.040080   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.040090   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:27.040096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:27.040144   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:27.088614   79191 cri.go:89] found id: ""
	I0816 00:37:27.088642   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.088651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:27.088657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:27.088732   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:27.125427   79191 cri.go:89] found id: ""
	I0816 00:37:27.125450   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.125457   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:27.125464   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:27.125511   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:27.158562   79191 cri.go:89] found id: ""
	I0816 00:37:27.158592   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.158602   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:27.158609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:27.158672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:27.192986   79191 cri.go:89] found id: ""
	I0816 00:37:27.193015   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.193026   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:27.193034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:27.193091   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:27.228786   79191 cri.go:89] found id: ""
	I0816 00:37:27.228828   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.228847   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:27.228858   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:27.228921   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:27.262776   79191 cri.go:89] found id: ""
	I0816 00:37:27.262808   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.262819   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:27.262829   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:27.262844   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:27.276444   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:27.276470   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:27.349918   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:27.349946   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:27.349958   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:27.435030   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:27.435061   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:27.484043   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:27.484069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.038376   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:30.051467   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:30.051530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:30.086346   79191 cri.go:89] found id: ""
	I0816 00:37:30.086376   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.086386   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:30.086394   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:30.086454   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:30.127665   79191 cri.go:89] found id: ""
	I0816 00:37:30.127691   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.127699   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:30.127704   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:30.127757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:30.169901   79191 cri.go:89] found id: ""
	I0816 00:37:30.169929   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.169939   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:30.169950   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:30.170013   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:30.212501   79191 cri.go:89] found id: ""
	I0816 00:37:30.212523   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.212530   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:30.212537   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:30.212584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:30.256560   79191 cri.go:89] found id: ""
	I0816 00:37:30.256583   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.256591   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:30.256597   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:30.256646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:30.291062   79191 cri.go:89] found id: ""
	I0816 00:37:30.291086   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.291093   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:30.291099   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:30.291143   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:30.328325   79191 cri.go:89] found id: ""
	I0816 00:37:30.328353   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.328361   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:30.328368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:30.328415   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:30.364946   79191 cri.go:89] found id: ""
	I0816 00:37:30.364972   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.364981   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:30.364991   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:30.365005   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:30.408090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:30.408117   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.463421   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:30.463456   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:30.479679   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:30.479711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:30.555394   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:30.555416   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:30.555432   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:28.494477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.494598   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.427146   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:32.926282   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:31.877446   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.376145   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:33.137366   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:33.150970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:33.151030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:33.191020   79191 cri.go:89] found id: ""
	I0816 00:37:33.191047   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.191055   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:33.191061   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:33.191112   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:33.227971   79191 cri.go:89] found id: ""
	I0816 00:37:33.228022   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.228030   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:33.228038   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:33.228089   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:33.265036   79191 cri.go:89] found id: ""
	I0816 00:37:33.265065   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.265074   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:33.265079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:33.265126   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:33.300385   79191 cri.go:89] found id: ""
	I0816 00:37:33.300411   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.300418   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:33.300425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:33.300487   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:33.335727   79191 cri.go:89] found id: ""
	I0816 00:37:33.335757   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.335768   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:33.335776   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:33.335839   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:33.373458   79191 cri.go:89] found id: ""
	I0816 00:37:33.373489   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.373500   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:33.373507   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:33.373568   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:33.410380   79191 cri.go:89] found id: ""
	I0816 00:37:33.410404   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.410413   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:33.410420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:33.410480   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:33.451007   79191 cri.go:89] found id: ""
	I0816 00:37:33.451030   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.451040   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:33.451049   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:33.451062   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:33.502215   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:33.502249   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:33.516123   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:33.516152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:33.590898   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:33.590921   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:33.590944   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:33.668404   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:33.668455   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:36.209671   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:36.223498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:36.223561   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:36.258980   79191 cri.go:89] found id: ""
	I0816 00:37:36.259041   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.259056   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:36.259064   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:36.259123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:36.293659   79191 cri.go:89] found id: ""
	I0816 00:37:36.293687   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.293694   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:36.293703   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:36.293761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:36.331729   79191 cri.go:89] found id: ""
	I0816 00:37:36.331756   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.331766   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:36.331773   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:36.331830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:36.368441   79191 cri.go:89] found id: ""
	I0816 00:37:36.368470   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.368479   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:36.368486   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:36.368533   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:36.405338   79191 cri.go:89] found id: ""
	I0816 00:37:36.405368   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.405380   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:36.405389   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:36.405448   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:36.441986   79191 cri.go:89] found id: ""
	I0816 00:37:36.442018   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.442029   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:36.442038   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:36.442097   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:36.478102   79191 cri.go:89] found id: ""
	I0816 00:37:36.478183   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.478197   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:36.478206   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:36.478269   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:36.517138   79191 cri.go:89] found id: ""
	I0816 00:37:36.517167   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.517178   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:36.517190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:36.517205   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:36.570009   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:36.570042   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:36.583534   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:36.583565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:36.651765   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:36.651794   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:36.651808   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:36.732836   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:36.732870   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:32.495090   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.996253   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.926615   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:37.425790   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:36.377305   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:38.876443   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.274490   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:39.288528   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:39.288591   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:39.325560   79191 cri.go:89] found id: ""
	I0816 00:37:39.325582   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.325589   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:39.325599   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:39.325656   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:39.365795   79191 cri.go:89] found id: ""
	I0816 00:37:39.365822   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.365829   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:39.365837   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:39.365906   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:39.404933   79191 cri.go:89] found id: ""
	I0816 00:37:39.404961   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.404971   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:39.404977   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:39.405041   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:39.442712   79191 cri.go:89] found id: ""
	I0816 00:37:39.442736   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.442747   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:39.442754   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:39.442814   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:39.484533   79191 cri.go:89] found id: ""
	I0816 00:37:39.484557   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.484566   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:39.484573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:39.484636   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:39.522089   79191 cri.go:89] found id: ""
	I0816 00:37:39.522115   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.522125   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:39.522133   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:39.522194   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:39.557099   79191 cri.go:89] found id: ""
	I0816 00:37:39.557128   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.557138   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:39.557145   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:39.557205   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:39.594809   79191 cri.go:89] found id: ""
	I0816 00:37:39.594838   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.594849   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:39.594859   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:39.594874   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:39.611079   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:39.611110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:39.683156   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:39.683182   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:39.683198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:39.761198   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:39.761235   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:39.800972   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:39.801003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:37.494553   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.495854   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.427910   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.926445   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.376128   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:43.377791   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:42.354816   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:42.368610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:42.368673   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:42.404716   79191 cri.go:89] found id: ""
	I0816 00:37:42.404738   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.404745   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:42.404753   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:42.404798   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:42.441619   79191 cri.go:89] found id: ""
	I0816 00:37:42.441649   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.441660   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:42.441667   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:42.441726   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:42.480928   79191 cri.go:89] found id: ""
	I0816 00:37:42.480965   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.480976   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:42.480983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:42.481051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:42.519187   79191 cri.go:89] found id: ""
	I0816 00:37:42.519216   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.519226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:42.519234   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:42.519292   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:42.554928   79191 cri.go:89] found id: ""
	I0816 00:37:42.554956   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.554967   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:42.554974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:42.555035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:42.593436   79191 cri.go:89] found id: ""
	I0816 00:37:42.593472   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.593481   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:42.593487   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:42.593545   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:42.628078   79191 cri.go:89] found id: ""
	I0816 00:37:42.628101   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.628108   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:42.628113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:42.628172   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:42.662824   79191 cri.go:89] found id: ""
	I0816 00:37:42.662852   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.662862   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:42.662871   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:42.662888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:42.677267   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:42.677290   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:42.749570   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:42.749599   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:42.749615   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:42.831177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:42.831213   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:42.871928   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:42.871957   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.430704   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:45.444400   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:45.444461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:45.479503   79191 cri.go:89] found id: ""
	I0816 00:37:45.479529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.479537   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:45.479543   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:45.479596   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:45.518877   79191 cri.go:89] found id: ""
	I0816 00:37:45.518907   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.518917   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:45.518925   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:45.518992   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:45.553936   79191 cri.go:89] found id: ""
	I0816 00:37:45.553966   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.553977   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:45.553984   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:45.554035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:45.593054   79191 cri.go:89] found id: ""
	I0816 00:37:45.593081   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.593088   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:45.593095   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:45.593147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:45.631503   79191 cri.go:89] found id: ""
	I0816 00:37:45.631529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.631537   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:45.631543   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:45.631599   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:45.667435   79191 cri.go:89] found id: ""
	I0816 00:37:45.667459   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.667466   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:45.667473   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:45.667529   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:45.702140   79191 cri.go:89] found id: ""
	I0816 00:37:45.702168   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.702179   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:45.702187   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:45.702250   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:45.736015   79191 cri.go:89] found id: ""
	I0816 00:37:45.736048   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.736059   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:45.736070   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:45.736085   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:45.817392   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:45.817427   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:45.856421   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:45.856451   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.912429   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:45.912476   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:45.928411   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:45.928435   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:46.001141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:41.995835   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.497033   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.426414   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:46.927720   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:45.876721   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:47.877185   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.877396   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.501317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:48.515114   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:48.515190   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:48.553776   79191 cri.go:89] found id: ""
	I0816 00:37:48.553802   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.553810   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:48.553816   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:48.553890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:48.589760   79191 cri.go:89] found id: ""
	I0816 00:37:48.589786   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.589794   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:48.589800   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:48.589871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:48.629792   79191 cri.go:89] found id: ""
	I0816 00:37:48.629816   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.629825   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:48.629833   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:48.629898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:48.668824   79191 cri.go:89] found id: ""
	I0816 00:37:48.668852   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.668860   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:48.668866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:48.668930   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:48.704584   79191 cri.go:89] found id: ""
	I0816 00:37:48.704615   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.704626   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:48.704634   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:48.704691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:48.738833   79191 cri.go:89] found id: ""
	I0816 00:37:48.738855   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.738863   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:48.738868   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:48.738928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:48.774943   79191 cri.go:89] found id: ""
	I0816 00:37:48.774972   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.774981   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:48.774989   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:48.775051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:48.808802   79191 cri.go:89] found id: ""
	I0816 00:37:48.808825   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.808832   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:48.808841   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:48.808856   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:48.858849   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:48.858880   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:48.873338   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:48.873369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:48.950172   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:48.950195   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:48.950209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:49.038642   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:49.038679   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:51.581947   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:51.596612   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:51.596691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:51.631468   79191 cri.go:89] found id: ""
	I0816 00:37:51.631498   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.631509   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:51.631517   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:51.631577   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:51.666922   79191 cri.go:89] found id: ""
	I0816 00:37:51.666953   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.666963   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:51.666971   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:51.667034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:51.707081   79191 cri.go:89] found id: ""
	I0816 00:37:51.707109   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.707116   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:51.707122   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:51.707189   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:51.743884   79191 cri.go:89] found id: ""
	I0816 00:37:51.743912   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.743925   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:51.743932   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:51.743990   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:51.779565   79191 cri.go:89] found id: ""
	I0816 00:37:51.779595   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.779603   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:51.779610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:51.779658   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:46.994211   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.995446   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.495519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.426703   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.426947   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:53.427050   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:52.377050   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.877759   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.818800   79191 cri.go:89] found id: ""
	I0816 00:37:51.818824   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.818831   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:51.818837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:51.818899   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:51.855343   79191 cri.go:89] found id: ""
	I0816 00:37:51.855367   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.855374   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:51.855380   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:51.855426   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:51.890463   79191 cri.go:89] found id: ""
	I0816 00:37:51.890496   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.890505   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:51.890513   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:51.890526   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:51.977168   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:51.977209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:52.021626   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:52.021660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:52.076983   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:52.077027   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:52.092111   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:52.092142   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:52.172738   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:54.673192   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:54.688780   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.688853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.725279   79191 cri.go:89] found id: ""
	I0816 00:37:54.725308   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.725318   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:54.725325   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.725383   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:54.764326   79191 cri.go:89] found id: ""
	I0816 00:37:54.764353   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.764364   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:54.764372   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:54.764423   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:54.805221   79191 cri.go:89] found id: ""
	I0816 00:37:54.805252   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.805263   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:54.805270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:54.805334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:54.849724   79191 cri.go:89] found id: ""
	I0816 00:37:54.849750   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.849759   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:54.849765   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:54.849824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:54.894438   79191 cri.go:89] found id: ""
	I0816 00:37:54.894460   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.894468   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:54.894475   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:54.894532   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:54.933400   79191 cri.go:89] found id: ""
	I0816 00:37:54.933422   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.933431   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:54.933439   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:54.933497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:54.982249   79191 cri.go:89] found id: ""
	I0816 00:37:54.982277   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.982286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:54.982294   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:54.982353   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:55.024431   79191 cri.go:89] found id: ""
	I0816 00:37:55.024458   79191 logs.go:276] 0 containers: []
	W0816 00:37:55.024469   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:55.024479   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.024499   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:55.107089   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:55.107119   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:55.148949   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.148981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.202865   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.202902   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.218528   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.218556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:55.304995   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:53.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:55.995483   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.926671   78713 pod_ready.go:82] duration metric: took 4m0.007058537s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:37:54.926700   78713 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:37:54.926711   78713 pod_ready.go:39] duration metric: took 4m7.919515966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:37:54.926728   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:37:54.926764   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.926821   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.983024   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:54.983043   78713 cri.go:89] found id: ""
	I0816 00:37:54.983052   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:54.983103   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:54.988579   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.988644   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:55.035200   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.035231   78713 cri.go:89] found id: ""
	I0816 00:37:55.035241   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:55.035291   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.040701   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:55.040777   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:55.087306   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.087330   78713 cri.go:89] found id: ""
	I0816 00:37:55.087340   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:55.087422   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.092492   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:55.092560   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:55.144398   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.144424   78713 cri.go:89] found id: ""
	I0816 00:37:55.144433   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:55.144494   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.149882   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:55.149953   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:55.193442   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.193464   78713 cri.go:89] found id: ""
	I0816 00:37:55.193472   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:55.193528   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.198812   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:55.198886   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:55.238634   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.238656   78713 cri.go:89] found id: ""
	I0816 00:37:55.238666   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:55.238729   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.243141   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:55.243229   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:55.281414   78713 cri.go:89] found id: ""
	I0816 00:37:55.281439   78713 logs.go:276] 0 containers: []
	W0816 00:37:55.281449   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:55.281457   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:55.281519   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:55.319336   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.319357   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.319363   78713 cri.go:89] found id: ""
	I0816 00:37:55.319371   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:55.319431   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.323837   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.328777   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:55.328801   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.376259   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:55.376290   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.419553   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:55.419584   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.476026   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.476058   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.544263   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.544297   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.561818   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.561858   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:55.701342   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:55.701375   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:55.746935   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:55.746968   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.787200   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:55.787234   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.825257   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:55.825282   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.865569   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:55.865594   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.905234   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.905269   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:56.391175   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:56.391208   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:58.943163   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:58.961551   78713 api_server.go:72] duration metric: took 4m17.689832084s to wait for apiserver process to appear ...
	I0816 00:37:58.961592   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:37:58.961630   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:58.961697   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:59.001773   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.001794   78713 cri.go:89] found id: ""
	I0816 00:37:59.001803   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:59.001876   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.006168   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:59.006222   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:59.041625   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.041647   78713 cri.go:89] found id: ""
	I0816 00:37:59.041654   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:59.041715   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.046258   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:59.046323   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:59.086070   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.086089   78713 cri.go:89] found id: ""
	I0816 00:37:59.086097   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:59.086151   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.090556   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:59.090626   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:59.129889   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.129931   78713 cri.go:89] found id: ""
	I0816 00:37:59.129942   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:59.130008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.135694   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:59.135775   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.375656   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.375979   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:57.805335   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:57.819904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:57.819989   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:57.856119   79191 cri.go:89] found id: ""
	I0816 00:37:57.856146   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.856153   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:57.856160   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:57.856217   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:57.892797   79191 cri.go:89] found id: ""
	I0816 00:37:57.892825   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.892833   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:57.892841   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:57.892905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:57.928753   79191 cri.go:89] found id: ""
	I0816 00:37:57.928784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.928795   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:57.928803   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:57.928884   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:57.963432   79191 cri.go:89] found id: ""
	I0816 00:37:57.963462   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.963474   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:57.963481   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:57.963538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.998759   79191 cri.go:89] found id: ""
	I0816 00:37:57.998784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.998793   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:57.998801   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:57.998886   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:58.035262   79191 cri.go:89] found id: ""
	I0816 00:37:58.035288   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.035296   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:58.035303   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:58.035358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:58.071052   79191 cri.go:89] found id: ""
	I0816 00:37:58.071079   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.071087   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:58.071092   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:58.071150   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:58.110047   79191 cri.go:89] found id: ""
	I0816 00:37:58.110074   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.110083   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:58.110090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:58.110101   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:58.164792   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:58.164823   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:58.178742   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:58.178770   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:58.251861   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:58.251899   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:58.251921   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:58.329805   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:58.329859   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
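At this point in the run the node handled by process 79191 (the v1.20.0 cluster) has no control-plane containers at all: every crictl query for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard returns an empty list, so the fallback "kubectl describe nodes" can only fail with "The connection to the server localhost:8443 was refused". The same checks can be reproduced by hand; a minimal sketch, assuming SSH access to that node (for example via minikube ssh with the profile name, which is not shown in this excerpt):

	# list any kube-apiserver containers known to CRI-O; empty output matches the log above
	sudo crictl ps -a --quiet --name=kube-apiserver
	# probe the apiserver's secure port directly; "connection refused" is the same symptom
	curl -sk https://localhost:8443/healthz
	# confirm the kubelet service state and look at its recent journal entries
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet -n 50 --no-pager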
	I0816 00:38:00.872911   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:00.887914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:00.887986   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:00.925562   79191 cri.go:89] found id: ""
	I0816 00:38:00.925595   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.925606   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:00.925615   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:00.925669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:00.961476   79191 cri.go:89] found id: ""
	I0816 00:38:00.961498   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.961505   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:00.961510   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:00.961554   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:00.997575   79191 cri.go:89] found id: ""
	I0816 00:38:00.997599   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.997608   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:00.997616   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:00.997677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:01.035130   79191 cri.go:89] found id: ""
	I0816 00:38:01.035158   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.035169   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:01.035177   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:01.035232   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:01.073768   79191 cri.go:89] found id: ""
	I0816 00:38:01.073800   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.073811   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:01.073819   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:01.073898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:01.107904   79191 cri.go:89] found id: ""
	I0816 00:38:01.107928   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.107937   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:01.107943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:01.108004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:01.142654   79191 cri.go:89] found id: ""
	I0816 00:38:01.142690   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.142701   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:01.142709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:01.142766   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:01.187565   79191 cri.go:89] found id: ""
	I0816 00:38:01.187599   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.187610   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:01.187621   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:01.187635   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:01.265462   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:01.265493   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:01.265508   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:01.346988   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:01.347020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:01.390977   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:01.391006   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.443858   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:01.443892   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:57.996188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:00.495210   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.176702   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.176728   78713 cri.go:89] found id: ""
	I0816 00:37:59.176738   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:59.176799   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.182305   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:59.182387   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:59.223938   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.223960   78713 cri.go:89] found id: ""
	I0816 00:37:59.223968   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:59.224023   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.228818   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:59.228884   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:59.264566   78713 cri.go:89] found id: ""
	I0816 00:37:59.264589   78713 logs.go:276] 0 containers: []
	W0816 00:37:59.264597   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:59.264606   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:59.264654   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:59.302534   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.302560   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.302565   78713 cri.go:89] found id: ""
	I0816 00:37:59.302574   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:59.302621   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.307021   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.311258   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:59.311299   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:59.425542   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:59.425574   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.466078   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:59.466107   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:59.480894   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:59.480925   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.524790   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:59.524822   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.568832   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:59.568862   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.619399   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:59.619433   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.658616   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:59.658645   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.720421   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:59.720469   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.756558   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:59.756586   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.798650   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:59.798674   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:59.864280   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:59.864323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:59.913086   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:59.913118   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:02.828194   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:38:02.832896   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:38:02.834035   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:02.834059   78713 api_server.go:131] duration metric: took 3.87246001s to wait for apiserver health ...
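Here the control plane tracked by process 78713 answers its health probe: https://192.168.39.185:8443/healthz returns 200 and the reported control plane version is v1.31.0. A minimal sketch of the equivalent manual probe, assuming the apiserver certificate is not trusted by the host (hence the insecure -k flag) and that the kubeconfig context written by this run is available:

	# direct healthz probe against the endpoint shown in the log
	curl -sk https://192.168.39.185:8443/healthz
	# the same check routed through the apiserver via kubectl
	kubectl --context embed-certs-758469 get --raw /healthz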
	I0816 00:38:02.834067   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:02.834089   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:02.834145   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:02.873489   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:02.873512   78713 cri.go:89] found id: ""
	I0816 00:38:02.873521   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:38:02.873577   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.878807   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:02.878883   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:02.919930   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:02.919949   78713 cri.go:89] found id: ""
	I0816 00:38:02.919957   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:38:02.920008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.924459   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:02.924525   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:02.964609   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:02.964636   78713 cri.go:89] found id: ""
	I0816 00:38:02.964644   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:38:02.964697   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.968808   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:02.968921   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:03.017177   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.017201   78713 cri.go:89] found id: ""
	I0816 00:38:03.017210   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:38:03.017275   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.021905   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:03.021992   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:03.061720   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.061741   78713 cri.go:89] found id: ""
	I0816 00:38:03.061748   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:38:03.061801   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.066149   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:03.066206   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:03.107130   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.107149   78713 cri.go:89] found id: ""
	I0816 00:38:03.107156   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:38:03.107213   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.111323   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:03.111372   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:03.149906   78713 cri.go:89] found id: ""
	I0816 00:38:03.149927   78713 logs.go:276] 0 containers: []
	W0816 00:38:03.149934   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:03.149940   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:03.150000   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:03.190981   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.191007   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.191011   78713 cri.go:89] found id: ""
	I0816 00:38:03.191018   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:38:03.191066   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.195733   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.199755   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:03.199775   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:03.302209   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:38:03.302239   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:03.352505   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:38:03.352548   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.392296   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:38:03.392323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.448092   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:38:03.448130   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.487516   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:38:03.487541   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:03.541954   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:03.541989   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:03.557026   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:38:03.557049   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:03.602639   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:38:03.602670   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:03.642706   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:38:03.642733   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.683504   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:38:03.683530   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.721802   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:03.721826   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.089579   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.089621   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.376613   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:03.376837   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:06.679744   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:06.679797   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.679805   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.679812   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.679819   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.679825   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.679849   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.679861   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.679869   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.679878   78713 system_pods.go:74] duration metric: took 3.845804999s to wait for pod list to return data ...
	I0816 00:38:06.679886   78713 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:06.682521   78713 default_sa.go:45] found service account: "default"
	I0816 00:38:06.682553   78713 default_sa.go:55] duration metric: took 2.660224ms for default service account to be created ...
	I0816 00:38:06.682565   78713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:06.688149   78713 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:06.688178   78713 system_pods.go:89] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.688183   78713 system_pods.go:89] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.688187   78713 system_pods.go:89] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.688192   78713 system_pods.go:89] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.688196   78713 system_pods.go:89] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.688199   78713 system_pods.go:89] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.688206   78713 system_pods.go:89] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.688213   78713 system_pods.go:89] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.688220   78713 system_pods.go:126] duration metric: took 5.649758ms to wait for k8s-apps to be running ...
	I0816 00:38:06.688226   78713 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:06.688268   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:06.706263   78713 system_svc.go:56] duration metric: took 18.025675ms WaitForService to wait for kubelet
	I0816 00:38:06.706301   78713 kubeadm.go:582] duration metric: took 4m25.434584326s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:06.706337   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:06.709536   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:06.709553   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:06.709565   78713 node_conditions.go:105] duration metric: took 3.213145ms to run NodePressure ...
	I0816 00:38:06.709576   78713 start.go:241] waiting for startup goroutines ...
	I0816 00:38:06.709582   78713 start.go:246] waiting for cluster config update ...
	I0816 00:38:06.709593   78713 start.go:255] writing updated cluster config ...
	I0816 00:38:06.709864   78713 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:06.755974   78713 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:06.757917   78713 out.go:177] * Done! kubectl is now configured to use "embed-certs-758469" cluster and "default" namespace by default
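The embed-certs-758469 startup finishes successfully, but metrics-server-6867b74b74-pnmsm is still Pending with its container not ready, which is why the pod-ready waits elsewhere in this log keep reporting "Ready":"False". A short sketch of how that pod could be inspected afterwards, assuming the context shown in the line above:

	# kube-system overview; metrics-server should show 0/1 ready, matching the pod list above
	kubectl --context embed-certs-758469 -n kube-system get pods
	# events and container state for the pending pod (name taken from the log)
	kubectl --context embed-certs-758469 -n kube-system describe pod metrics-server-6867b74b74-pnmsm
	# container log, if the container has started at all
	kubectl --context embed-certs-758469 -n kube-system logs metrics-server-6867b74b74-pnmsm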
	I0816 00:38:03.959040   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:03.973674   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:03.973758   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:04.013606   79191 cri.go:89] found id: ""
	I0816 00:38:04.013653   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.013661   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:04.013667   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:04.013737   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:04.054558   79191 cri.go:89] found id: ""
	I0816 00:38:04.054590   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.054602   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:04.054609   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:04.054667   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:04.097116   79191 cri.go:89] found id: ""
	I0816 00:38:04.097143   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.097154   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:04.097162   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:04.097223   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:04.136770   79191 cri.go:89] found id: ""
	I0816 00:38:04.136798   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.136809   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:04.136816   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:04.136865   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:04.171906   79191 cri.go:89] found id: ""
	I0816 00:38:04.171929   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.171937   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:04.171943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:04.172004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:04.208694   79191 cri.go:89] found id: ""
	I0816 00:38:04.208725   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.208735   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:04.208744   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:04.208803   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:04.276713   79191 cri.go:89] found id: ""
	I0816 00:38:04.276744   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.276755   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:04.276763   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:04.276823   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:04.316646   79191 cri.go:89] found id: ""
	I0816 00:38:04.316669   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.316696   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:04.316707   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:04.316722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:04.329819   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:04.329864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:04.399032   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:04.399052   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:04.399080   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.487665   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:04.487698   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:04.530937   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.530962   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:02.496317   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:04.496477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:05.878535   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:08.377096   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:07.087584   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:07.102015   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:07.102086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:07.139530   79191 cri.go:89] found id: ""
	I0816 00:38:07.139559   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.139569   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:07.139577   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:07.139642   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:07.179630   79191 cri.go:89] found id: ""
	I0816 00:38:07.179659   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.179669   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:07.179675   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:07.179734   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:07.216407   79191 cri.go:89] found id: ""
	I0816 00:38:07.216435   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.216444   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:07.216449   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:07.216509   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:07.252511   79191 cri.go:89] found id: ""
	I0816 00:38:07.252536   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.252544   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:07.252551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:07.252613   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:07.288651   79191 cri.go:89] found id: ""
	I0816 00:38:07.288679   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.288689   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:07.288698   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:07.288757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:07.325910   79191 cri.go:89] found id: ""
	I0816 00:38:07.325963   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.325974   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:07.325982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:07.326046   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:07.362202   79191 cri.go:89] found id: ""
	I0816 00:38:07.362230   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.362244   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:07.362251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:07.362316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:07.405272   79191 cri.go:89] found id: ""
	I0816 00:38:07.405302   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.405313   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:07.405324   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:07.405339   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:07.461186   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:07.461222   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:07.475503   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:07.475544   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:07.555146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:07.555165   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:07.555179   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:07.635162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:07.635201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.174600   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:10.190418   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:10.190479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:10.251925   79191 cri.go:89] found id: ""
	I0816 00:38:10.251960   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.251969   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:10.251974   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:10.252027   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:10.289038   79191 cri.go:89] found id: ""
	I0816 00:38:10.289078   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.289088   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:10.289096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:10.289153   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:10.334562   79191 cri.go:89] found id: ""
	I0816 00:38:10.334591   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.334601   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:10.334609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:10.334669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:10.371971   79191 cri.go:89] found id: ""
	I0816 00:38:10.372000   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.372010   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:10.372018   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:10.372084   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:10.409654   79191 cri.go:89] found id: ""
	I0816 00:38:10.409685   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.409696   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:10.409703   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:10.409770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:10.446639   79191 cri.go:89] found id: ""
	I0816 00:38:10.446666   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.446675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:10.446683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:10.446750   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:10.483601   79191 cri.go:89] found id: ""
	I0816 00:38:10.483629   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.483641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:10.483648   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:10.483707   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:10.519640   79191 cri.go:89] found id: ""
	I0816 00:38:10.519670   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.519679   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:10.519690   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:10.519704   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:10.603281   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:10.603300   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:10.603311   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:10.689162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:10.689198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.730701   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:10.730724   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:10.780411   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:10.780441   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:06.997726   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:09.495539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.495753   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:10.876242   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.376332   78747 pod_ready.go:82] duration metric: took 4m0.006460655s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:38:11.376362   78747 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:38:11.376372   78747 pod_ready.go:39] duration metric: took 4m3.906659924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
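The wait loop in the run tracked by process 78747 gives up here: after 4m0s metrics-server-6867b74b74-sxqkg never reported Ready, and WaitExtra ends with context deadline exceeded. A hedged sketch of follow-up checks; the kubectl context below is a placeholder, since that run's profile name does not appear in this part of the log:

	# readiness and placement of the stuck pod (pod name from the log, context is a placeholder)
	kubectl --context PROFILE -n kube-system get pod metrics-server-6867b74b74-sxqkg -o wide
	kubectl --context PROFILE -n kube-system describe pod metrics-server-6867b74b74-sxqkg
	# whether the aggregated metrics API that pod backs is registered and available
	kubectl --context PROFILE get apiservice v1beta1.metrics.k8s.io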
	I0816 00:38:11.376389   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:38:11.376416   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:11.376472   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:11.425716   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:11.425741   78747 cri.go:89] found id: ""
	I0816 00:38:11.425749   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:11.425804   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.431122   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:11.431195   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:11.468622   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:11.468647   78747 cri.go:89] found id: ""
	I0816 00:38:11.468657   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:11.468713   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.474270   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:11.474329   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:11.518448   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:11.518493   78747 cri.go:89] found id: ""
	I0816 00:38:11.518502   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:11.518569   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.524185   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:11.524242   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:11.561343   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:11.561367   78747 cri.go:89] found id: ""
	I0816 00:38:11.561374   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:11.561418   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.565918   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:11.565992   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:11.606010   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.606036   78747 cri.go:89] found id: ""
	I0816 00:38:11.606043   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:11.606097   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.610096   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:11.610166   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:11.646204   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:11.646229   78747 cri.go:89] found id: ""
	I0816 00:38:11.646238   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:11.646295   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.650405   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:11.650467   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:11.690407   78747 cri.go:89] found id: ""
	I0816 00:38:11.690436   78747 logs.go:276] 0 containers: []
	W0816 00:38:11.690446   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:11.690454   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:11.690510   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:11.736695   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:11.736722   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:11.736729   78747 cri.go:89] found id: ""
	I0816 00:38:11.736738   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:11.736803   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.741022   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.744983   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:11.745011   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.791452   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:11.791484   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:12.304425   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:12.304470   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:12.341318   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:12.341353   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:12.401425   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:12.401464   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:12.476598   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:12.476653   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:12.495594   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:12.495629   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:12.645961   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:12.645991   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:12.697058   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:12.697091   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:12.749085   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:12.749117   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:12.795786   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:12.795831   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:12.835928   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:12.835959   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:12.872495   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:12.872524   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
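Each of these gathering passes runs the same set of commands over SSH: journalctl for the kubelet and CRI-O, dmesg, kubectl describe nodes, crictl logs for each control-plane container, and a container status listing. Outside the test harness, roughly the same bundle can be collected in one step; a sketch, with the profile name left as a placeholder:

	# gather kubelet, dmesg, CRI-O and container logs for a profile into a single file
	minikube logs -p PROFILE --file=minikube-logs.txt
	# or tail a single container's log directly on the node, as the harness does
	sudo /usr/bin/crictl logs --tail 400 CONTAINER_ID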
	I0816 00:38:13.294689   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:13.308762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:13.308822   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:13.345973   79191 cri.go:89] found id: ""
	I0816 00:38:13.346004   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.346015   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:13.346022   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:13.346083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:13.382905   79191 cri.go:89] found id: ""
	I0816 00:38:13.382934   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.382945   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:13.382952   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:13.383001   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:13.417616   79191 cri.go:89] found id: ""
	I0816 00:38:13.417650   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.417662   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:13.417669   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:13.417739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:13.453314   79191 cri.go:89] found id: ""
	I0816 00:38:13.453350   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.453360   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:13.453368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:13.453435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:13.488507   79191 cri.go:89] found id: ""
	I0816 00:38:13.488536   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.488547   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:13.488555   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:13.488614   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:13.527064   79191 cri.go:89] found id: ""
	I0816 00:38:13.527095   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.527108   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:13.527116   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:13.527178   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:13.562838   79191 cri.go:89] found id: ""
	I0816 00:38:13.562867   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.562876   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:13.562882   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:13.562944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:13.598924   79191 cri.go:89] found id: ""
	I0816 00:38:13.598963   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.598974   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:13.598985   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:13.598999   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:13.651122   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:13.651156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:13.665255   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:13.665281   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:13.742117   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:13.742135   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:13.742148   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:13.824685   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:13.824719   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
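For reference, the log-gathering pass recorded above can be reproduced by hand on the minikube node. A minimal sketch using the same commands the test runs over SSH, assuming the CRI-O runtime and the bundled kubectl path shown in the log (the container ID is a placeholder for one of the IDs crictl reports):

	# kubelet and CRI-O service logs
	$ sudo journalctl -u kubelet -n 400
	$ sudo journalctl -u crio -n 400
	# recent warning-or-worse kernel messages
	$ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# node and container state
	$ sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	$ sudo crictl ps -a
	# per-container logs by ID, e.g. the kube-apiserver container found above
	$ sudo crictl logs --tail 400 <container-id>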
	I0816 00:38:16.366542   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:16.380855   79191 kubeadm.go:597] duration metric: took 4m3.665876253s to restartPrimaryControlPlane
	W0816 00:38:16.380919   79191 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:38:16.380946   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:38:13.496702   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.996304   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.421355   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:15.437651   78747 api_server.go:72] duration metric: took 4m15.224557183s to wait for apiserver process to appear ...
	I0816 00:38:15.437677   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:38:15.437721   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:15.437782   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:15.473240   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:15.473265   78747 cri.go:89] found id: ""
	I0816 00:38:15.473273   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:15.473335   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.477666   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:15.477734   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:15.526073   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:15.526095   78747 cri.go:89] found id: ""
	I0816 00:38:15.526104   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:15.526165   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.530706   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:15.530775   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:15.571124   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:15.571149   78747 cri.go:89] found id: ""
	I0816 00:38:15.571159   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:15.571217   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.578613   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:15.578690   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:15.617432   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:15.617454   78747 cri.go:89] found id: ""
	I0816 00:38:15.617464   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:15.617529   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.621818   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:15.621899   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:15.658963   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:15.658981   78747 cri.go:89] found id: ""
	I0816 00:38:15.658988   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:15.659037   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.663170   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:15.663230   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:15.699297   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.699322   78747 cri.go:89] found id: ""
	I0816 00:38:15.699331   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:15.699388   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.704029   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:15.704085   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:15.742790   78747 cri.go:89] found id: ""
	I0816 00:38:15.742816   78747 logs.go:276] 0 containers: []
	W0816 00:38:15.742825   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:15.742830   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:15.742875   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:15.776898   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:15.776918   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:15.776922   78747 cri.go:89] found id: ""
	I0816 00:38:15.776945   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:15.777007   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.781511   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.785953   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:15.785981   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.840461   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:15.840498   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:16.320285   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:16.320323   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.362171   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:16.362200   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:16.444803   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:16.444834   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:16.461705   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:16.461732   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:16.576190   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:16.576220   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:16.626407   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:16.626449   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:16.673004   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:16.673036   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:16.724770   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:16.724797   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:16.764812   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:16.764838   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:16.804268   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:16.804300   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:16.841197   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:16.841221   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.380352   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:38:19.386760   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:38:19.387751   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:19.387773   78747 api_server.go:131] duration metric: took 3.950088801s to wait for apiserver health ...
	I0816 00:38:19.387781   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:19.387801   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:19.387843   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:19.429928   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:19.429952   78747 cri.go:89] found id: ""
	I0816 00:38:19.429961   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:19.430021   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.434822   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:19.434870   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:19.476789   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:19.476811   78747 cri.go:89] found id: ""
	I0816 00:38:19.476819   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:19.476869   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.481574   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:19.481640   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:19.528718   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:19.528742   78747 cri.go:89] found id: ""
	I0816 00:38:19.528750   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:19.528799   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.533391   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:19.533455   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:19.581356   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:19.581374   78747 cri.go:89] found id: ""
	I0816 00:38:19.581381   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:19.581427   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.585915   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:19.585977   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:19.623514   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:19.623544   78747 cri.go:89] found id: ""
	I0816 00:38:19.623552   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:19.623606   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.627652   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:19.627711   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:19.663933   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:19.663957   78747 cri.go:89] found id: ""
	I0816 00:38:19.663967   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:19.664032   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.668093   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:19.668162   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:19.707688   78747 cri.go:89] found id: ""
	I0816 00:38:19.707716   78747 logs.go:276] 0 containers: []
	W0816 00:38:19.707726   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:19.707741   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:19.707804   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:19.745900   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:19.745930   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.745935   78747 cri.go:89] found id: ""
	I0816 00:38:19.745944   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:19.746002   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.750934   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.755022   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:19.755044   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:19.807228   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:19.807257   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:19.918242   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:19.918274   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:21.772367   79191 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.39139467s)
	I0816 00:38:21.772449   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:18.495150   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:20.995073   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:19.969165   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:19.969198   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:20.008945   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:20.008975   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:20.050080   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:20.050120   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:20.450059   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:20.450107   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:20.490694   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:20.490721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:20.532856   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:20.532890   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:20.609130   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:20.609178   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:20.624248   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:20.624279   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:20.675636   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:20.675669   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:20.716694   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:20.716721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:23.289748   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:23.289773   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.289778   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.289782   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.289786   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.289789   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.289792   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.289799   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.289814   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.289827   78747 system_pods.go:74] duration metric: took 3.902040304s to wait for pod list to return data ...
	I0816 00:38:23.289836   78747 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:23.293498   78747 default_sa.go:45] found service account: "default"
	I0816 00:38:23.293528   78747 default_sa.go:55] duration metric: took 3.671585ms for default service account to be created ...
	I0816 00:38:23.293539   78747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:23.298509   78747 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:23.298534   78747 system_pods.go:89] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.298540   78747 system_pods.go:89] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.298545   78747 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.298549   78747 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.298552   78747 system_pods.go:89] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.298556   78747 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.298561   78747 system_pods.go:89] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.298567   78747 system_pods.go:89] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.298576   78747 system_pods.go:126] duration metric: took 5.030455ms to wait for k8s-apps to be running ...
	I0816 00:38:23.298585   78747 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:23.298632   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:23.318383   78747 system_svc.go:56] duration metric: took 19.787836ms WaitForService to wait for kubelet
	I0816 00:38:23.318419   78747 kubeadm.go:582] duration metric: took 4m23.105331758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:23.318446   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:23.322398   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:23.322425   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:23.322436   78747 node_conditions.go:105] duration metric: took 3.985107ms to run NodePressure ...
	I0816 00:38:23.322447   78747 start.go:241] waiting for startup goroutines ...
	I0816 00:38:23.322454   78747 start.go:246] waiting for cluster config update ...
	I0816 00:38:23.322464   78747 start.go:255] writing updated cluster config ...
	I0816 00:38:23.322801   78747 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:23.374057   78747 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:23.376186   78747 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-616827" cluster and "default" namespace by default
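The readiness checks recorded above for the "default-k8s-diff-port-616827" profile reduce to a few probes that can be rerun manually. A rough sketch, assuming the same apiserver endpoint (192.168.50.128:8444) shown in the log and that /healthz remains readable without credentials (as it is with the default RBAC bindings):

	# apiserver health; the test expects HTTP 200 with body "ok"
	$ curl -k https://192.168.50.128:8444/healthz
	# kubelet service state on the node
	$ sudo systemctl is-active kubelet
	# kube-system pods the test waits on (coredns, etcd, kube-*, storage-provisioner, metrics-server)
	$ kubectl --context default-k8s-diff-port-616827 -n kube-system get pods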
	I0816 00:38:21.788969   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:38:21.800050   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:38:21.811193   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:38:21.811216   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:38:21.811260   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:38:21.821328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:38:21.821391   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:38:21.831777   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:38:21.841357   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:38:21.841424   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:38:21.851564   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.861262   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:38:21.861322   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.871929   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:38:21.881544   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:38:21.881595   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:38:21.891725   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:38:22.120640   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:38:22.997351   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:25.494851   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:27.494976   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:29.495248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:31.994586   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:33.995565   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:36.494547   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:38.495194   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:40.995653   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:42.996593   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:45.495409   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:47.496072   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:49.997645   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:52.496097   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:54.994390   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:56.995869   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:58.996230   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:01.495217   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:02.989403   78489 pod_ready.go:82] duration metric: took 4m0.001106911s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	E0816 00:39:02.989435   78489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 00:39:02.989456   78489 pod_ready.go:39] duration metric: took 4m14.547419665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:02.989488   78489 kubeadm.go:597] duration metric: took 4m21.799297957s to restartPrimaryControlPlane
	W0816 00:39:02.989550   78489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:39:02.989582   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:39:29.166109   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.176504479s)
	I0816 00:39:29.166193   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:29.188082   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:39:29.207577   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:39:29.230485   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:39:29.230510   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:39:29.230564   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:39:29.242106   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:39:29.242177   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:39:29.258756   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:39:29.272824   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:39:29.272896   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:39:29.285574   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.294909   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:39:29.294985   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.304843   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:39:29.315125   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:39:29.315173   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:39:29.325422   78489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:39:29.375775   78489 kubeadm.go:310] W0816 00:39:29.358885    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.376658   78489 kubeadm.go:310] W0816 00:39:29.359753    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.504337   78489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:39:38.219769   78489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 00:39:38.219865   78489 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:39:38.219968   78489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:39:38.220094   78489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:39:38.220215   78489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 00:39:38.220302   78489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:39:38.221971   78489 out.go:235]   - Generating certificates and keys ...
	I0816 00:39:38.222037   78489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:39:38.222119   78489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:39:38.222234   78489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:39:38.222316   78489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:39:38.222430   78489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:39:38.222509   78489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:39:38.222584   78489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:39:38.222684   78489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:39:38.222767   78489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:39:38.222831   78489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:39:38.222862   78489 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:39:38.222943   78489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:39:38.223035   78489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:39:38.223121   78489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 00:39:38.223212   78489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:39:38.223299   78489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:39:38.223355   78489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:39:38.223452   78489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:39:38.223534   78489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:39:38.225012   78489 out.go:235]   - Booting up control plane ...
	I0816 00:39:38.225086   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:39:38.225153   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:39:38.225211   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:39:38.225296   78489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:39:38.225366   78489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:39:38.225399   78489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:39:38.225542   78489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 00:39:38.225706   78489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 00:39:38.225803   78489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001324649s
	I0816 00:39:38.225917   78489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 00:39:38.226004   78489 kubeadm.go:310] [api-check] The API server is healthy after 5.001672205s
	I0816 00:39:38.226125   78489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 00:39:38.226267   78489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 00:39:38.226352   78489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 00:39:38.226537   78489 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-819398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 00:39:38.226620   78489 kubeadm.go:310] [bootstrap-token] Using token: 4qqrpj.xeaneqftblh8gcp3
	I0816 00:39:38.227962   78489 out.go:235]   - Configuring RBAC rules ...
	I0816 00:39:38.228060   78489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 00:39:38.228140   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 00:39:38.228290   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 00:39:38.228437   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 00:39:38.228558   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 00:39:38.228697   78489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 00:39:38.228877   78489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 00:39:38.228942   78489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 00:39:38.229000   78489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 00:39:38.229010   78489 kubeadm.go:310] 
	I0816 00:39:38.229086   78489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 00:39:38.229096   78489 kubeadm.go:310] 
	I0816 00:39:38.229160   78489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 00:39:38.229166   78489 kubeadm.go:310] 
	I0816 00:39:38.229186   78489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 00:39:38.229252   78489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 00:39:38.229306   78489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 00:39:38.229312   78489 kubeadm.go:310] 
	I0816 00:39:38.229361   78489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 00:39:38.229367   78489 kubeadm.go:310] 
	I0816 00:39:38.229403   78489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 00:39:38.229408   78489 kubeadm.go:310] 
	I0816 00:39:38.229447   78489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 00:39:38.229504   78489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 00:39:38.229562   78489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 00:39:38.229567   78489 kubeadm.go:310] 
	I0816 00:39:38.229636   78489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 00:39:38.229701   78489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 00:39:38.229707   78489 kubeadm.go:310] 
	I0816 00:39:38.229793   78489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.229925   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0816 00:39:38.229954   78489 kubeadm.go:310] 	--control-plane 
	I0816 00:39:38.229960   78489 kubeadm.go:310] 
	I0816 00:39:38.230029   78489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 00:39:38.230038   78489 kubeadm.go:310] 
	I0816 00:39:38.230109   78489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.230211   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
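When the existing control plane cannot be restarted (the "! Unable to restart control-plane node(s), will reset cluster" warning above), the run falls back to a reset-and-reinit sequence. A condensed sketch of the same commands the log records, using the version-specific binary path shown there; the --ignore-preflight-errors list is abbreviated here, the full list appears in the kubeadm init line above:

	# wipe the previous control-plane state (CRI-O socket as used by this profile)
	$ sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	    kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	# remove stale kubeconfig fragments when the control-plane endpoint is not found in them
	$ sudo rm -f /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	    /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	# re-initialize from the generated config, ignoring the preflight checks listed in the log
	$ sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem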
	I0816 00:39:38.230223   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:39:38.230232   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:39:38.231742   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:39:38.233079   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:39:38.245435   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:39:38.269502   78489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:39:38.269566   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.269593   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-819398 minikube.k8s.io/updated_at=2024_08_16T00_39_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=no-preload-819398 minikube.k8s.io/primary=true
	I0816 00:39:38.304272   78489 ops.go:34] apiserver oom_adj: -16
	I0816 00:39:38.485643   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.986569   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.486177   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.985737   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.486311   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.985981   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.486071   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.986414   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.486292   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.603092   78489 kubeadm.go:1113] duration metric: took 4.333590575s to wait for elevateKubeSystemPrivileges
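After init succeeds, the run bootstraps cluster access for its own components: it labels the control-plane node, grants cluster-admin to the kube-system default service account, and polls until that service account exists (the elevateKubeSystemPrivileges step timed above). A sketch of the equivalent commands, taken directly from the log:

	# bind cluster-admin to kube-system's default service account ("minikube-rbac")
	$ sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac \
	    --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	# poll until the controller manager has created the default service account
	$ sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig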
	I0816 00:39:42.603133   78489 kubeadm.go:394] duration metric: took 5m1.4690157s to StartCluster
	I0816 00:39:42.603158   78489 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.603258   78489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:39:42.604833   78489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.605072   78489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:39:42.605133   78489 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:39:42.605219   78489 addons.go:69] Setting storage-provisioner=true in profile "no-preload-819398"
	I0816 00:39:42.605254   78489 addons.go:234] Setting addon storage-provisioner=true in "no-preload-819398"
	I0816 00:39:42.605251   78489 addons.go:69] Setting default-storageclass=true in profile "no-preload-819398"
	I0816 00:39:42.605259   78489 addons.go:69] Setting metrics-server=true in profile "no-preload-819398"
	I0816 00:39:42.605295   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:39:42.605308   78489 addons.go:234] Setting addon metrics-server=true in "no-preload-819398"
	I0816 00:39:42.605309   78489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-819398"
	W0816 00:39:42.605320   78489 addons.go:243] addon metrics-server should already be in state true
	W0816 00:39:42.605266   78489 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:39:42.605355   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605370   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605697   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605717   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605731   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605735   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605777   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605837   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.606458   78489 out.go:177] * Verifying Kubernetes components...
	I0816 00:39:42.607740   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:39:42.622512   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0816 00:39:42.623130   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.623697   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.623720   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.624070   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.624666   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.624695   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.626221   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0816 00:39:42.626220   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0816 00:39:42.626608   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.626695   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.627158   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627179   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627329   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627346   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627490   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.627696   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.628049   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.628165   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.628189   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.632500   78489 addons.go:234] Setting addon default-storageclass=true in "no-preload-819398"
	W0816 00:39:42.632523   78489 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:39:42.632554   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.632897   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.632928   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.644779   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0816 00:39:42.645422   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.645995   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.646026   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.646395   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.646607   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.646960   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0816 00:39:42.647374   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.648126   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.648141   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.648471   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.649494   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.649732   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.651509   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.651600   78489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:39:42.652823   78489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:39:42.652936   78489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:42.652951   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:39:42.652970   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654197   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:39:42.654217   78489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:39:42.654234   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654380   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I0816 00:39:42.654812   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.655316   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.655332   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.655784   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.656330   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.656356   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.659148   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659319   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659629   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659648   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659776   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659794   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659959   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660138   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660164   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660330   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660444   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660478   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660587   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.660583   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.674431   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
	I0816 00:39:42.674827   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.675399   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.675420   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.675756   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.675993   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.677956   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.678195   78489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:42.678211   78489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:39:42.678230   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.681163   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681593   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.681615   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681916   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.682099   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.682197   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.682276   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.822056   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:39:42.840356   78489 node_ready.go:35] waiting up to 6m0s for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852864   78489 node_ready.go:49] node "no-preload-819398" has status "Ready":"True"
	I0816 00:39:42.852887   78489 node_ready.go:38] duration metric: took 12.497677ms for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852899   78489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:42.866637   78489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:42.908814   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:39:42.908832   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:39:42.949047   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:39:42.949070   78489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:39:42.959159   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:43.021536   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.021557   78489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:39:43.068214   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:43.082144   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.243834   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.243857   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244177   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244192   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.244201   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.244212   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244451   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244505   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.250358   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.250376   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.250608   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:43.250648   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.250656   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419115   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.350866587s)
	I0816 00:39:44.419166   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419175   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419519   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419545   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419542   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419561   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419573   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419824   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419836   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419851   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.436623   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.354435707s)
	I0816 00:39:44.436682   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.436697   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437131   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437150   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437160   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.437169   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437207   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.437495   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437517   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437528   78489 addons.go:475] Verifying addon metrics-server=true in "no-preload-819398"
	I0816 00:39:44.439622   78489 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 00:39:44.441097   78489 addons.go:510] duration metric: took 1.835961958s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
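The addon step above copies the metrics-server manifests into the guest and applies them with the bundled kubectl under KUBECONFIG=/var/lib/minikube/kubeconfig. Below is a minimal local sketch of that apply using os/exec instead of minikube's ssh_runner; the manifest paths are the ones quoted in the log, and running kubectl locally against that kubeconfig path is an assumption made only for illustration.

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Manifest paths as quoted in the log above.
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}

		// Build: kubectl apply -f a.yaml -f b.yaml ...
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}

		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubectl apply failed: %v", err)
		}
	}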
	I0816 00:39:44.878479   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:47.373009   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:49.380832   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:50.372883   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.372919   78489 pod_ready.go:82] duration metric: took 7.506242182s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.372933   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378463   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.378486   78489 pod_ready.go:82] duration metric: took 5.546402ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378496   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383347   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.383364   78489 pod_ready.go:82] duration metric: took 4.862995ms for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383374   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387672   78489 pod_ready.go:93] pod "kube-proxy-nl7g6" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.387693   78489 pod_ready.go:82] duration metric: took 4.312811ms for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387703   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391921   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.391939   78489 pod_ready.go:82] duration metric: took 4.229092ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391945   78489 pod_ready.go:39] duration metric: took 7.539034647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
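The pod_ready wait above polls each system-critical pod until its Ready condition reports True. A rough client-go equivalent of that loop is sketched here; the kubeconfig path is a placeholder and the pod name is simply the etcd pod from the log, so this is illustrative rather than minikube's own code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll for up to 6 minutes, matching the wait window in the log.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-819398", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}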
	I0816 00:39:50.391958   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:39:50.392005   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:39:50.407980   78489 api_server.go:72] duration metric: took 7.802877941s to wait for apiserver process to appear ...
	I0816 00:39:50.408017   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:39:50.408039   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:39:50.412234   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:39:50.413278   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:39:50.413297   78489 api_server.go:131] duration metric: took 5.273051ms to wait for apiserver health ...
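The healthz wait above issues a GET against https://192.168.61.15:8443/healthz and accepts a 200 response with body "ok". A small stand-alone probe in the same spirit follows; the endpoint is the one from the log, and InsecureSkipVerify is used only to keep the sketch short, where a real check would trust the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Skip cert verification for brevity only; use the cluster CA in practice.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		resp, err := client.Get("https://192.168.61.15:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
	}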
	I0816 00:39:50.413304   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:39:50.573185   78489 system_pods.go:59] 9 kube-system pods found
	I0816 00:39:50.573226   78489 system_pods.go:61] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.573233   78489 system_pods.go:61] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.573239   78489 system_pods.go:61] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.573244   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.573250   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.573257   78489 system_pods.go:61] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.573262   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.573271   78489 system_pods.go:61] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.573278   78489 system_pods.go:61] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.573288   78489 system_pods.go:74] duration metric: took 159.97729ms to wait for pod list to return data ...
	I0816 00:39:50.573301   78489 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:39:50.771164   78489 default_sa.go:45] found service account: "default"
	I0816 00:39:50.771189   78489 default_sa.go:55] duration metric: took 197.881739ms for default service account to be created ...
	I0816 00:39:50.771198   78489 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:39:50.973415   78489 system_pods.go:86] 9 kube-system pods found
	I0816 00:39:50.973448   78489 system_pods.go:89] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.973453   78489 system_pods.go:89] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.973457   78489 system_pods.go:89] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.973461   78489 system_pods.go:89] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.973465   78489 system_pods.go:89] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.973468   78489 system_pods.go:89] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.973471   78489 system_pods.go:89] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.973477   78489 system_pods.go:89] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.973482   78489 system_pods.go:89] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.973491   78489 system_pods.go:126] duration metric: took 202.288008ms to wait for k8s-apps to be running ...
	I0816 00:39:50.973498   78489 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:39:50.973539   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:50.989562   78489 system_svc.go:56] duration metric: took 16.053781ms WaitForService to wait for kubelet
	I0816 00:39:50.989595   78489 kubeadm.go:582] duration metric: took 8.384495377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:39:50.989618   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:39:51.171076   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:39:51.171109   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:39:51.171120   78489 node_conditions.go:105] duration metric: took 181.496732ms to run NodePressure ...
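The node_conditions check above reads each node's capacity to report ephemeral storage and CPU. An equivalent one-off check with client-go might look like the following; the kubeconfig path is a placeholder.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity quantities correspond to the values the log prints above.
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}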
	I0816 00:39:51.171134   78489 start.go:241] waiting for startup goroutines ...
	I0816 00:39:51.171144   78489 start.go:246] waiting for cluster config update ...
	I0816 00:39:51.171157   78489 start.go:255] writing updated cluster config ...
	I0816 00:39:51.171465   78489 ssh_runner.go:195] Run: rm -f paused
	I0816 00:39:51.220535   78489 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:39:51.223233   78489 out.go:177] * Done! kubectl is now configured to use "no-preload-819398" cluster and "default" namespace by default
	I0816 00:40:18.143220   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:40:18.143333   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:40:18.144757   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.144804   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.144888   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.145018   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.145134   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:18.145210   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:18.146791   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:18.146879   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:18.146965   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:18.147072   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:18.147164   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:18.147258   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:18.147340   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:18.147434   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:18.147525   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:18.147613   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:18.147708   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:18.147744   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:18.147791   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:18.147839   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:18.147916   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:18.147989   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:18.148045   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:18.148194   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:18.148318   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:18.148365   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:18.148458   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:18.149817   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:18.149941   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:18.150044   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:18.150107   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:18.150187   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:18.150323   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:40:18.150380   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:40:18.150460   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150671   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.150766   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150953   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151033   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151232   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151305   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151520   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151614   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151840   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151856   79191 kubeadm.go:310] 
	I0816 00:40:18.151917   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:40:18.151978   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:40:18.151992   79191 kubeadm.go:310] 
	I0816 00:40:18.152046   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:40:18.152097   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:40:18.152204   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:40:18.152218   79191 kubeadm.go:310] 
	I0816 00:40:18.152314   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:40:18.152349   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:40:18.152377   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:40:18.152384   79191 kubeadm.go:310] 
	I0816 00:40:18.152466   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:40:18.152537   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:40:18.152543   79191 kubeadm.go:310] 
	I0816 00:40:18.152674   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:40:18.152769   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:40:18.152853   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:40:18.152914   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:40:18.152978   79191 kubeadm.go:310] 
	W0816 00:40:18.153019   79191 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 00:40:18.153055   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:40:18.634058   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:40:18.648776   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:40:18.659504   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:40:18.659529   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:40:18.659584   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:40:18.670234   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:40:18.670285   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:40:18.680370   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:40:18.689496   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:40:18.689557   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:40:18.698949   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.708056   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:40:18.708118   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.718261   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:40:18.728708   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:40:18.728777   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
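The repeated [kubelet-check] failures in the attempt above, and in the retry that follows, come from kubeadm polling the kubelet's local healthz endpoint on port 10248. A minimal equivalent probe is sketched here; the 40-second window mirrors kubeadm's initial timeout but the retry interval is otherwise illustrative.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(40 * time.Second)

		for time.Now().Before(deadline) {
			// Same endpoint kubeadm curls: http://localhost:10248/healthz
			resp, err := client.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("kubelet never became healthy; check 'systemctl status kubelet' and 'journalctl -xeu kubelet'")
	}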
	I0816 00:40:18.739253   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:40:18.819666   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.819746   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.966568   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.966704   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.966868   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:19.168323   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:19.170213   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:19.170335   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:19.170464   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:19.170546   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:19.170598   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:19.170670   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:19.170740   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:19.170828   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:19.170924   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:19.171031   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:19.171129   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:19.171179   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:19.171261   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:19.421256   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:19.585260   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:19.672935   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:19.928620   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:19.952420   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:19.953527   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:19.953578   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:20.090384   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:20.092904   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:20.093037   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:20.105743   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:20.106980   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:20.108199   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:20.111014   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:41:00.113053   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:41:00.113479   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:00.113752   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:05.113795   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:05.114091   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:15.114695   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:15.114932   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:35.116019   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:35.116207   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.116728   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:42:15.116994   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.117018   79191 kubeadm.go:310] 
	I0816 00:42:15.117071   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:42:15.117136   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:42:15.117147   79191 kubeadm.go:310] 
	I0816 00:42:15.117198   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:42:15.117248   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:42:15.117402   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:42:15.117412   79191 kubeadm.go:310] 
	I0816 00:42:15.117543   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:42:15.117601   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:42:15.117636   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:42:15.117644   79191 kubeadm.go:310] 
	I0816 00:42:15.117778   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:42:15.117918   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:42:15.117929   79191 kubeadm.go:310] 
	I0816 00:42:15.118083   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:42:15.118215   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:42:15.118313   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:42:15.118412   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:42:15.118433   79191 kubeadm.go:310] 
	I0816 00:42:15.118582   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:42:15.118698   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:42:15.118843   79191 kubeadm.go:394] duration metric: took 8m2.460648867s to StartCluster
	I0816 00:42:15.118855   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:42:15.118891   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:42:15.118957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:42:15.162809   79191 cri.go:89] found id: ""
	I0816 00:42:15.162837   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.162848   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:42:15.162855   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:42:15.162925   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:42:15.198020   79191 cri.go:89] found id: ""
	I0816 00:42:15.198042   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.198053   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:42:15.198063   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:42:15.198132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:42:15.238168   79191 cri.go:89] found id: ""
	I0816 00:42:15.238197   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.238206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:42:15.238213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:42:15.238273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:42:15.278364   79191 cri.go:89] found id: ""
	I0816 00:42:15.278391   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.278401   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:42:15.278407   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:42:15.278465   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:42:15.316182   79191 cri.go:89] found id: ""
	I0816 00:42:15.316209   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.316216   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:42:15.316222   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:42:15.316278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:42:15.352934   79191 cri.go:89] found id: ""
	I0816 00:42:15.352962   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.352970   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:42:15.352976   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:42:15.353031   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:42:15.388940   79191 cri.go:89] found id: ""
	I0816 00:42:15.388966   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.388973   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:42:15.388983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:42:15.389042   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:42:15.424006   79191 cri.go:89] found id: ""
	I0816 00:42:15.424035   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.424043   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
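The cri.go sweep above asks crictl for containers matching each control-plane component and finds none. A compact reproduction of that sweep with os/exec follows; it assumes crictl is on PATH and that the caller can reach the CRI socket (the log runs it via sudo).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names swept in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}

		for _, name := range components {
			// crictl ps -a --quiet --name=<component> prints matching container IDs.
			out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers\n", name, len(ids))
		}
	}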
	I0816 00:42:15.424054   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:42:15.424073   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:42:15.504823   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:42:15.504846   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:42:15.504858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:42:15.608927   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:42:15.608959   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:42:15.676785   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:42:15.676810   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:42:15.744763   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:42:15.744805   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0816 00:42:15.760944   79191 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 00:42:15.761012   79191 out.go:270] * 
	W0816 00:42:15.761078   79191 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.761098   79191 out.go:270] * 
	W0816 00:42:15.762220   79191 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:42:15.765697   79191 out.go:201] 
	W0816 00:42:15.766942   79191 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.767018   79191 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 00:42:15.767040   79191 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 00:42:15.768526   79191 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.647359933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723768937647333551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69ec328d-3373-4bd0-b6f6-273fd3b72bf6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.647975187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=639f7549-602b-4e18-a20a-1225162cf547 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.648039559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=639f7549-602b-4e18-a20a-1225162cf547 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.648080732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=639f7549-602b-4e18-a20a-1225162cf547 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.681719763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6869adfd-5310-4d41-abd9-bc02212310d3 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.681812300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6869adfd-5310-4d41-abd9-bc02212310d3 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.683059212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9f0167a-e78f-41a9-bbe3-0a5bf30d2dc5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.683512326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723768937683485552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9f0167a-e78f-41a9-bbe3-0a5bf30d2dc5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.684135521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1c01206-1595-4912-9a71-933d92714353 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.684209109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1c01206-1595-4912-9a71-933d92714353 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.684244055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a1c01206-1595-4912-9a71-933d92714353 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.716324878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68f00b2f-28eb-4b3c-ae1e-c1b08b1ee900 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.716410680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68f00b2f-28eb-4b3c-ae1e-c1b08b1ee900 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.717584046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61255c33-1f55-4c85-903d-e9872420751f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.717977970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723768937717952376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61255c33-1f55-4c85-903d-e9872420751f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.718619721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd88296b-4502-40f6-8b25-f128469da965 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.718684331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd88296b-4502-40f6-8b25-f128469da965 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.718726623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dd88296b-4502-40f6-8b25-f128469da965 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.750522209Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79aacd1a-3fc3-4bb5-a748-8ce659efa1af name=/runtime.v1.RuntimeService/Version
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.750616959Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79aacd1a-3fc3-4bb5-a748-8ce659efa1af name=/runtime.v1.RuntimeService/Version
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.751695999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b40ec7a-2e36-47bc-8ba6-ae6265b7e65e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.752049501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723768937752024199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b40ec7a-2e36-47bc-8ba6-ae6265b7e65e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.752728292Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=324d72fe-2044-42ec-a198-e6bcecda703b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.752795122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=324d72fe-2044-42ec-a198-e6bcecda703b name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:42:17 old-k8s-version-098619 crio[650]: time="2024-08-16 00:42:17.752827179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=324d72fe-2044-42ec-a198-e6bcecda703b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug16 00:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055820] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042316] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.997792] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.610931] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.386268] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 00:34] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.149906] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.218773] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.113453] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.292715] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.582198] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.063869] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.975940] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	[ +13.278959] kauditd_printk_skb: 46 callbacks suppressed
	[Aug16 00:38] systemd-fstab-generator[5083]: Ignoring "noauto" option for root device
	[Aug16 00:40] systemd-fstab-generator[5363]: Ignoring "noauto" option for root device
	[  +0.062259] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:42:17 up 8 min,  0 users,  load average: 0.00, 0.08, 0.07
	Linux old-k8s-version-098619 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000e81c0, 0xc000b921b0, 0x1, 0x0, 0x0)
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000200a80)
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]: goroutine 127 [select]:
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000c9e6e0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000de7500, 0x0, 0x0)
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000200a80)
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 16 00:42:14 old-k8s-version-098619 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 16 00:42:14 old-k8s-version-098619 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 16 00:42:14 old-k8s-version-098619 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 16 00:42:15 old-k8s-version-098619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 16 00:42:15 old-k8s-version-098619 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 16 00:42:15 old-k8s-version-098619 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 16 00:42:15 old-k8s-version-098619 kubelet[5595]: I0816 00:42:15.727365    5595 server.go:416] Version: v1.20.0
	Aug 16 00:42:15 old-k8s-version-098619 kubelet[5595]: I0816 00:42:15.727695    5595 server.go:837] Client rotation is on, will bootstrap in background
	Aug 16 00:42:15 old-k8s-version-098619 kubelet[5595]: I0816 00:42:15.729598    5595 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 16 00:42:15 old-k8s-version-098619 kubelet[5595]: W0816 00:42:15.730493    5595 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 16 00:42:15 old-k8s-version-098619 kubelet[5595]: I0816 00:42:15.730897    5595 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 2 (223.498627ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-098619" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (747.61s)
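The kubeadm output and the minikube suggestion captured above already name the relevant checks. As a minimal sketch of running them against this profile (the profile name and start flags are taken from this run's Audit log, and the cgroup-driver override is the suggestion printed at the end of the log above; this is not part of the test itself):

	# on the node, e.g. via: minikube ssh -p old-k8s-version-098619
	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# suggested retry with the kubelet cgroup driver pinned to systemd
	minikube start -p old-k8s-version-098619 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd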

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-758469 -n embed-certs-758469
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-16 00:47:07.278712124 +0000 UTC m=+6101.859291088
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
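The wait above is driven by the test framework; a hand-check of the same namespace and selector (a sketch, assuming the embed-certs-758469 profile is registered as a kubeconfig context of the same name, which is minikube's default) would be:

	kubectl --context embed-certs-758469 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context embed-certs-758469 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard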
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-758469 -n embed-certs-758469
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-758469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-758469 logs -n 25: (2.076791249s)
E0816 00:47:09.800825   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-697641 sudo cat                              | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo find                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo crio                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-697641                                       | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-067133 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | disable-driver-mounts-067133                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:25 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-819398             | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC | 16 Aug 24 00:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-758469            | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-616827  | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098619        | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-819398                  | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-758469                 | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-616827       | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098619             | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:29:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 00:29:51.785297   79191 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:29:51.785388   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785392   79191 out.go:358] Setting ErrFile to fd 2...
	I0816 00:29:51.785396   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785578   79191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:29:51.786145   79191 out.go:352] Setting JSON to false
	I0816 00:29:51.787066   79191 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7892,"bootTime":1723760300,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:29:51.787122   79191 start.go:139] virtualization: kvm guest
	I0816 00:29:51.789057   79191 out.go:177] * [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:29:51.790274   79191 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:29:51.790269   79191 notify.go:220] Checking for updates...
	I0816 00:29:51.792828   79191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:29:51.794216   79191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:29:51.795553   79191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:29:51.796761   79191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:29:51.798018   79191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:29:51.799561   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:29:51.799935   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.799990   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.814617   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0816 00:29:51.815056   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.815584   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.815606   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.815933   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.816131   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:51.817809   79191 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 00:29:51.819204   79191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:29:51.819604   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.819652   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.834270   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0816 00:29:51.834584   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.834992   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.835015   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.835303   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.835478   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:49.226097   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:51.870472   79191 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:29:51.872031   79191 start.go:297] selected driver: kvm2
	I0816 00:29:51.872049   79191 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.872137   79191 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:29:51.872785   79191 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.872848   79191 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:29:51.887731   79191 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:29:51.888078   79191 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:29:51.888141   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:29:51.888154   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:29:51.888203   79191 start.go:340] cluster config:
	{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.888300   79191 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.890190   79191 out.go:177] * Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	I0816 00:29:51.891529   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:29:51.891557   79191 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:29:51.891565   79191 cache.go:56] Caching tarball of preloaded images
	I0816 00:29:51.891645   79191 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:29:51.891664   79191 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 00:29:51.891747   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:29:51.891915   79191 start.go:360] acquireMachinesLock for old-k8s-version-098619: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
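The preload.go lines above show the cache check that lets this start skip a download: the preloaded image tarball for v1.20.0/cri-o is already present under .minikube/cache, so it is only verified and reused. A minimal sketch of that kind of existence check in Go; the helper names and filename layout here are illustrative assumptions, not minikube's real preload package:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds a cache location shaped like the one in the log above
// for a given Kubernetes version and container runtime (naming is assumed).
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

// hasLocalPreload reports whether the tarball is already cached, mirroring
// the "Found local preload ... skipping download" message above.
func hasLocalPreload(minikubeHome, k8sVersion, runtime string) bool {
	_, err := os.Stat(preloadPath(minikubeHome, k8sVersion, runtime))
	return err == nil
}

func main() {
	home := "/home/jenkins/minikube-integration/19452-12919/.minikube"
	if hasLocalPreload(home, "v1.20.0", "cri-o") {
		fmt.Println("found local preload, skipping download")
	} else {
		fmt.Println("no local preload, would download")
	}
}

When the stat fails, the caller would fall back to downloading the tarball before provisioning the node, which is what happens later for the v1.31.0 profile in this same log.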
	I0816 00:29:55.306158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:58.378266   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:04.458137   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:07.530158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:13.610160   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:16.682057   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:22.762088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:25.834157   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:31.914106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:34.986091   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:41.066143   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:44.138152   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:50.218140   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:53.290166   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:59.370080   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:02.442130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:08.522126   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:11.594144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:17.674104   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:20.746185   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:26.826131   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:29.898113   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:35.978100   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:39.050136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:45.130120   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:48.202078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:54.282078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:57.354088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:03.434136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:06.506153   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:12.586125   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:15.658144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:21.738130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:24.810191   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:30.890130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:33.962132   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:40.042062   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:43.114154   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:49.194151   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:52.266130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:58.346106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:01.418139   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
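The long run of "Error dialing TCP ... connect: no route to host" entries above (process 78489) is libmachine repeatedly probing the no-preload VM's SSH port at 192.168.61.15:22 while the guest is unreachable, until provisioning gives up just below ("provision: host is not running") and schedules a retry. A rough sketch of such a dial-until-reachable loop, assuming a plain fixed-interval poll rather than libmachine's actual retry schedule:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls addr ("host:22") until a TCP connection succeeds or the
// overall deadline expires, logging each failure like the entries above.
func waitForSSH(addr string, interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("Error dialing TCP: %v\n", err)
		if time.Now().After(stop) {
			return fmt.Errorf("gave up waiting for %s: %w", addr, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSSH("192.168.61.15:22", 3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}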
	I0816 00:33:04.422042   78713 start.go:364] duration metric: took 4m25.166768519s to acquireMachinesLock for "embed-certs-758469"
	I0816 00:33:04.422099   78713 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:04.422107   78713 fix.go:54] fixHost starting: 
	I0816 00:33:04.422426   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:04.422458   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:04.437335   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I0816 00:33:04.437779   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:04.438284   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:04.438306   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:04.438646   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:04.438873   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:04.439045   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:04.440597   78713 fix.go:112] recreateIfNeeded on embed-certs-758469: state=Stopped err=<nil>
	I0816 00:33:04.440627   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	W0816 00:33:04.440781   78713 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:04.442527   78713 out.go:177] * Restarting existing kvm2 VM for "embed-certs-758469" ...
	I0816 00:33:04.419735   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:04.419772   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420077   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:33:04.420102   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420299   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:33:04.421914   78489 machine.go:96] duration metric: took 4m37.429789672s to provisionDockerMachine
	I0816 00:33:04.421957   78489 fix.go:56] duration metric: took 4m37.451098771s for fixHost
	I0816 00:33:04.421965   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 4m37.451130669s
	W0816 00:33:04.421995   78489 start.go:714] error starting host: provision: host is not running
	W0816 00:33:04.422099   78489 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 00:33:04.422111   78489 start.go:729] Will try again in 5 seconds ...
	I0816 00:33:04.443838   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Start
	I0816 00:33:04.444035   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring networks are active...
	I0816 00:33:04.444849   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network default is active
	I0816 00:33:04.445168   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network mk-embed-certs-758469 is active
	I0816 00:33:04.445491   78713 main.go:141] libmachine: (embed-certs-758469) Getting domain xml...
	I0816 00:33:04.446159   78713 main.go:141] libmachine: (embed-certs-758469) Creating domain...
	I0816 00:33:05.654817   78713 main.go:141] libmachine: (embed-certs-758469) Waiting to get IP...
	I0816 00:33:05.655625   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.656020   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.656064   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.655983   79868 retry.go:31] will retry after 273.341379ms: waiting for machine to come up
	I0816 00:33:05.930542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.931038   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.931061   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.931001   79868 retry.go:31] will retry after 320.172619ms: waiting for machine to come up
	I0816 00:33:06.252718   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.253117   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.253140   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.253091   79868 retry.go:31] will retry after 441.386495ms: waiting for machine to come up
	I0816 00:33:06.695681   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.696108   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.696134   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.696065   79868 retry.go:31] will retry after 491.272986ms: waiting for machine to come up
	I0816 00:33:07.188683   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.189070   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.189092   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.189025   79868 retry.go:31] will retry after 536.865216ms: waiting for machine to come up
	I0816 00:33:07.727831   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.728246   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.728276   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.728193   79868 retry.go:31] will retry after 813.064342ms: waiting for machine to come up
	I0816 00:33:08.543096   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:08.543605   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:08.543637   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:08.543549   79868 retry.go:31] will retry after 1.00495091s: waiting for machine to come up
	I0816 00:33:09.424586   78489 start.go:360] acquireMachinesLock for no-preload-819398: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:33:09.549815   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:09.550226   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:09.550255   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:09.550175   79868 retry.go:31] will retry after 1.483015511s: waiting for machine to come up
	I0816 00:33:11.034879   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:11.035277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:11.035315   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:11.035224   79868 retry.go:31] will retry after 1.513237522s: waiting for machine to come up
	I0816 00:33:12.550817   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:12.551172   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:12.551196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:12.551126   79868 retry.go:31] will retry after 1.483165174s: waiting for machine to come up
	I0816 00:33:14.036748   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:14.037142   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:14.037170   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:14.037087   79868 retry.go:31] will retry after 1.772679163s: waiting for machine to come up
	I0816 00:33:15.811699   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:15.812300   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:15.812334   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:15.812226   79868 retry.go:31] will retry after 3.026936601s: waiting for machine to come up
	I0816 00:33:18.842362   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:18.842759   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:18.842788   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:18.842715   79868 retry.go:31] will retry after 4.400445691s: waiting for machine to come up
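The retry.go entries above show the wait-for-IP loop for embed-certs-758469 backing off with growing, slightly jittered delays (273ms, 320ms, ... 4.4s) until the domain's DHCP lease appears. A simple sketch of that pattern, assuming a capped exponential backoff with jitter; the real retry helper's parameters are not visible in the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping a growing, jittered delay between tries (like the
// "will retry after 273.341379ms ... 4.400445691s" lines above).
func retryWithBackoff(attempts int, initial, max time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 250*time.Millisecond, 5*time.Second, func() error {
		return errors.New("unable to find current IP address")
	})
	fmt.Println("final:", err)
}

Here fn would wrap the libvirt lease lookup that keeps reporting "unable to find current IP address" above until the domain obtains 192.168.39.185.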
	I0816 00:33:23.247813   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248223   78713 main.go:141] libmachine: (embed-certs-758469) Found IP for machine: 192.168.39.185
	I0816 00:33:23.248254   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has current primary IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248265   78713 main.go:141] libmachine: (embed-certs-758469) Reserving static IP address...
	I0816 00:33:23.248613   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.248641   78713 main.go:141] libmachine: (embed-certs-758469) DBG | skip adding static IP to network mk-embed-certs-758469 - found existing host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"}
	I0816 00:33:23.248654   78713 main.go:141] libmachine: (embed-certs-758469) Reserved static IP address: 192.168.39.185
	I0816 00:33:23.248673   78713 main.go:141] libmachine: (embed-certs-758469) Waiting for SSH to be available...
	I0816 00:33:23.248687   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Getting to WaitForSSH function...
	I0816 00:33:23.250607   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.250931   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.250965   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.251113   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH client type: external
	I0816 00:33:23.251141   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa (-rw-------)
	I0816 00:33:23.251179   78713 main.go:141] libmachine: (embed-certs-758469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:23.251196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | About to run SSH command:
	I0816 00:33:23.251211   78713 main.go:141] libmachine: (embed-certs-758469) DBG | exit 0
	I0816 00:33:23.373899   78713 main.go:141] libmachine: (embed-certs-758469) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:23.374270   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetConfigRaw
	I0816 00:33:23.374914   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.377034   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377343   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.377370   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377561   78713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/config.json ...
	I0816 00:33:23.377760   78713 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:23.377776   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:23.378014   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.379950   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380248   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.380277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380369   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.380524   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380668   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380795   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.380950   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.381134   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.381145   78713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:23.486074   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:23.486106   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486462   78713 buildroot.go:166] provisioning hostname "embed-certs-758469"
	I0816 00:33:23.486491   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486677   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.489520   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.489905   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.489924   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.490108   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.490279   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490427   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490566   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.490730   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.490901   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.490920   78713 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-758469 && echo "embed-certs-758469" | sudo tee /etc/hostname
	I0816 00:33:23.614635   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-758469
	
	I0816 00:33:23.614671   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.617308   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617673   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.617701   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617881   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.618087   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618351   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.618536   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.618721   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.618746   78713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-758469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-758469/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-758469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:23.734901   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
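Every "ssh_runner.go:195] Run: ..." line in this section is a command executed on the guest over SSH with the key shown earlier (.../machines/embed-certs-758469/id_rsa) and host-key checking disabled, as in the ssh options logged above. A bare-bones sketch of running one such command with golang.org/x/crypto/ssh (an assumed dependency; this is not minikube's actual ssh_runner):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes cmd on addr ("ip:22") as user, authenticating with the
// private key at keyPath, and returns the combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.185:22", "docker",
		"/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa",
		"cat /etc/os-release")
	fmt.Println(out, err)
}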
	I0816 00:33:23.734931   78713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:23.734946   78713 buildroot.go:174] setting up certificates
	I0816 00:33:23.734953   78713 provision.go:84] configureAuth start
	I0816 00:33:23.734961   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.735255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.737952   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738312   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.738341   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738445   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.740589   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.740926   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.740953   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.741060   78713 provision.go:143] copyHostCerts
	I0816 00:33:23.741121   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:23.741138   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:23.741203   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:23.741357   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:23.741367   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:23.741393   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:23.741452   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:23.741458   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:23.741478   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:23.741525   78713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.embed-certs-758469 san=[127.0.0.1 192.168.39.185 embed-certs-758469 localhost minikube]
	I0816 00:33:23.871115   78713 provision.go:177] copyRemoteCerts
	I0816 00:33:23.871167   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:23.871190   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.874049   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874505   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.874538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874720   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.874913   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.875079   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.875210   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:23.959910   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:23.984454   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:33:24.009067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:24.036195   78713 provision.go:87] duration metric: took 301.229994ms to configureAuth
	I0816 00:33:24.036218   78713 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:24.036389   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:24.036453   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.039196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.039562   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039771   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.039970   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040125   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040224   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.040372   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.040584   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.040612   78713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:24.550693   78747 start.go:364] duration metric: took 4m44.527028624s to acquireMachinesLock for "default-k8s-diff-port-616827"
	I0816 00:33:24.550757   78747 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:24.550763   78747 fix.go:54] fixHost starting: 
	I0816 00:33:24.551164   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:24.551203   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:24.567741   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0816 00:33:24.568138   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:24.568674   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:33:24.568703   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:24.569017   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:24.569212   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:24.569385   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:33:24.570856   78747 fix.go:112] recreateIfNeeded on default-k8s-diff-port-616827: state=Stopped err=<nil>
	I0816 00:33:24.570901   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	W0816 00:33:24.571074   78747 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:24.572673   78747 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-616827" ...
	I0816 00:33:24.574220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Start
	I0816 00:33:24.574403   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring networks are active...
	I0816 00:33:24.575086   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network default is active
	I0816 00:33:24.575528   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network mk-default-k8s-diff-port-616827 is active
	I0816 00:33:24.576033   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Getting domain xml...
	I0816 00:33:24.576734   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Creating domain...
	I0816 00:33:24.314921   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:24.314951   78713 machine.go:96] duration metric: took 937.178488ms to provisionDockerMachine
	I0816 00:33:24.314964   78713 start.go:293] postStartSetup for "embed-certs-758469" (driver="kvm2")
	I0816 00:33:24.314974   78713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:24.315007   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.315405   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:24.315430   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.317962   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318242   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.318270   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318390   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.318588   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.318763   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.318900   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.400628   78713 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:24.405061   78713 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:24.405082   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:24.405148   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:24.405215   78713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:24.405302   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:24.414985   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:24.439646   78713 start.go:296] duration metric: took 124.668147ms for postStartSetup
	I0816 00:33:24.439692   78713 fix.go:56] duration metric: took 20.017583324s for fixHost
	I0816 00:33:24.439719   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.442551   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.442920   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.442954   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.443051   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.443257   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443434   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443567   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.443740   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.443912   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.443921   78713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:24.550562   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768404.525876526
	
	I0816 00:33:24.550588   78713 fix.go:216] guest clock: 1723768404.525876526
	I0816 00:33:24.550599   78713 fix.go:229] Guest: 2024-08-16 00:33:24.525876526 +0000 UTC Remote: 2024-08-16 00:33:24.439696953 +0000 UTC m=+285.318245053 (delta=86.179573ms)
	I0816 00:33:24.550618   78713 fix.go:200] guest clock delta is within tolerance: 86.179573ms
	I0816 00:33:24.550623   78713 start.go:83] releasing machines lock for "embed-certs-758469", held for 20.128541713s
	I0816 00:33:24.550647   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.551090   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:24.554013   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554358   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.554382   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554572   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555062   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555222   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555279   78713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:24.555330   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.555441   78713 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:24.555463   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.558216   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558368   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558567   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558719   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558723   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558742   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558925   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559074   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559205   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559285   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.559329   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.656926   78713 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:24.662590   78713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:24.811290   78713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:24.817486   78713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:24.817570   78713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:24.838317   78713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:24.838342   78713 start.go:495] detecting cgroup driver to use...
	I0816 00:33:24.838396   78713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:24.856294   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:24.875603   78713 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:24.875650   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:24.890144   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:24.904327   78713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:25.018130   78713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:25.149712   78713 docker.go:233] disabling docker service ...
	I0816 00:33:25.149795   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:25.165494   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:25.179554   78713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:25.330982   78713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:25.476436   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:25.493242   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:25.515688   78713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:25.515762   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.529924   78713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:25.529997   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.541412   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.551836   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.563356   78713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:25.574486   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.585533   78713 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.604169   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
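The sequence of "sudo sed -i ..." commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and the unprivileged-port sysctl. A small sketch of building and running one such substitution locally with os/exec; minikube runs the same expressions remotely through its SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

// setCrioOption rewrites `key = ...` to `key = "value"` in the CRI-O drop-in,
// using the same sed expression shape as the logged commands.
func setCrioOption(key, value string) error {
	expr := fmt.Sprintf(`s|^.*%s = .*$|%s = "%s"|`, key, key, value)
	out, err := exec.Command("sudo", "sed", "-i", expr, crioConf).CombinedOutput()
	if err != nil {
		return fmt.Errorf("sed failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	for k, v := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setCrioOption(k, v); err != nil {
			fmt.Println(err)
		}
	}
}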
	I0816 00:33:25.615335   78713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:25.629366   78713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:25.629427   78713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:25.645937   78713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:25.657132   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:25.771891   78713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:25.914817   78713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:25.914904   78713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
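"Will wait 60s for socket path /var/run/crio/crio.sock" above is a simple poll: stat the socket until it exists or the window closes, then move on to probing crictl. A minimal sketch of that wait, assuming a one-second poll interval (the actual interval is not shown in the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for path until it exists or timeout elapses, roughly the
// behaviour behind the "Will wait 60s for socket path" message above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}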
	I0816 00:33:25.919572   78713 start.go:563] Will wait 60s for crictl version
	I0816 00:33:25.919620   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:33:25.923419   78713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:25.969387   78713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:25.969484   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.002529   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.035709   78713 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:26.036921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:26.039638   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040001   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:26.040023   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040254   78713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:26.044444   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:26.057172   78713 kubeadm.go:883] updating cluster {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:26.057326   78713 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:26.057382   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:26.093950   78713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:26.094031   78713 ssh_runner.go:195] Run: which lz4
	I0816 00:33:26.097998   78713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:26.102152   78713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:26.102183   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:27.538323   78713 crio.go:462] duration metric: took 1.440354469s to copy over tarball
	I0816 00:33:27.538400   78713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:25.885210   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting to get IP...
	I0816 00:33:25.886135   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886555   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:25.886538   80004 retry.go:31] will retry after 214.751664ms: waiting for machine to come up
	I0816 00:33:26.103182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103652   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103677   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.103603   80004 retry.go:31] will retry after 239.667632ms: waiting for machine to come up
	I0816 00:33:26.345223   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345776   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.345701   80004 retry.go:31] will retry after 474.740445ms: waiting for machine to come up
	I0816 00:33:26.822224   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822682   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822716   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.822639   80004 retry.go:31] will retry after 574.324493ms: waiting for machine to come up
	I0816 00:33:27.398433   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398939   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398971   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.398904   80004 retry.go:31] will retry after 567.388033ms: waiting for machine to come up
	I0816 00:33:27.967686   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968225   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.968093   80004 retry.go:31] will retry after 940.450394ms: waiting for machine to come up
	I0816 00:33:28.910549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911058   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911088   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:28.911031   80004 retry.go:31] will retry after 919.494645ms: waiting for machine to come up
	I0816 00:33:29.832687   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833204   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833244   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:29.833189   80004 retry.go:31] will retry after 1.332024716s: waiting for machine to come up
	I0816 00:33:29.677224   78713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.138774475s)
	I0816 00:33:29.677252   78713 crio.go:469] duration metric: took 2.138901242s to extract the tarball
	I0816 00:33:29.677261   78713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:29.716438   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:29.768597   78713 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:29.768622   78713 cache_images.go:84] Images are preloaded, skipping loading
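The preload check above runs `sudo crictl images --output json` and looks for a marker image (kube-apiserver for the target version) to decide whether the preload tarball still needs to be copied and extracted. A rough Go equivalent; the JSON field names (images, repoTags) follow crictl's documented JSON output but should be treated as an assumption rather than a guaranteed schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the runtime already has the given image,
    // which is how the preload step decides to skip copying the tarball.
    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
        fmt.Println(ok, err)
    }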
	I0816 00:33:29.768634   78713 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.0 crio true true} ...
	I0816 00:33:29.768787   78713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-758469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:29.768874   78713 ssh_runner.go:195] Run: crio config
	I0816 00:33:29.813584   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:29.813607   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:29.813620   78713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:29.813644   78713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-758469 NodeName:embed-certs-758469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:29.813776   78713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-758469"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:29.813862   78713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:29.825680   78713 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:29.825744   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:29.836314   78713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 00:33:29.853030   78713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:29.869368   78713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
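The kubeadm.yaml.new copied above is rendered from the kubeadm options struct shown earlier (advertise address, node name, Kubernetes version, pod subnet). A trimmed-down sketch of that templating step with text/template; the template text here is an abbreviated stand-in for illustration, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        params := struct {
            AdvertiseAddress, NodeName, KubernetesVersion, PodSubnet string
        }{"192.168.39.185", "embed-certs-758469", "v1.31.0", "10.244.0.0/16"}

        // Render the config to stdout; the real flow copies it to /var/tmp/minikube/kubeadm.yaml.new.
        t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        if err := t.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }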
	I0816 00:33:29.886814   78713 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:29.890644   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:29.903138   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:30.040503   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:30.058323   78713 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469 for IP: 192.168.39.185
	I0816 00:33:30.058351   78713 certs.go:194] generating shared ca certs ...
	I0816 00:33:30.058372   78713 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:30.058559   78713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:30.058624   78713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:30.058638   78713 certs.go:256] generating profile certs ...
	I0816 00:33:30.058778   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/client.key
	I0816 00:33:30.058873   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key.0d0e36ad
	I0816 00:33:30.058930   78713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key
	I0816 00:33:30.059101   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:30.059146   78713 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:30.059162   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:30.059197   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:30.059251   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:30.059285   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:30.059345   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:30.060202   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:30.098381   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:30.135142   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:30.175518   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:30.214349   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 00:33:30.249278   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:30.273772   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:30.298067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:30.324935   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:30.351149   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:30.375636   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:30.399250   78713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:30.417646   78713 ssh_runner.go:195] Run: openssl version
	I0816 00:33:30.423691   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:30.435254   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439651   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439700   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.445673   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:30.456779   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:30.467848   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472199   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472274   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.478109   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:30.489481   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:30.500747   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505116   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505162   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.510739   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
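The `openssl x509 -hash` plus `ln -fs` pairs above implement OpenSSL's hashed-directory convention: each CA file under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 so TLS clients can find it. A small sketch of building that link name by shelling out to openssl, as the log does (createTrustLink is an illustrative name, not a minikube function):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // createTrustLink links certPath into dir under "<subject-hash>.0",
    // mirroring `openssl x509 -hash -noout` followed by `ln -fs`.
    func createTrustLink(certPath, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        err := createTrustLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println(err)
    }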
	I0816 00:33:30.521829   78713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:30.526444   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:30.532373   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:30.538402   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:30.544697   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:30.550762   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:30.556573   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
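Each `openssl x509 -checkend 86400` above asks whether the certificate expires within the next 24 hours; a failing check is what would trigger regeneration. The same test expressed with crypto/x509, as a sketch (the path is one of the files checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }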
	I0816 00:33:30.562513   78713 kubeadm.go:392] StartCluster: {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:30.562602   78713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:30.562650   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.607119   78713 cri.go:89] found id: ""
	I0816 00:33:30.607197   78713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:30.617798   78713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:30.617818   78713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:30.617873   78713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:30.627988   78713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:30.628976   78713 kubeconfig.go:125] found "embed-certs-758469" server: "https://192.168.39.185:8443"
	I0816 00:33:30.631601   78713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:30.642001   78713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.185
	I0816 00:33:30.642036   78713 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:30.642047   78713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:30.642088   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.685946   78713 cri.go:89] found id: ""
	I0816 00:33:30.686049   78713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:30.704130   78713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:30.714467   78713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:30.714490   78713 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:30.714534   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:33:30.723924   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:30.723985   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:30.733804   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:33:30.743345   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:30.743412   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:30.753604   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.763271   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:30.763340   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.773121   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:33:30.782507   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:30.782565   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:30.792652   78713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:30.802523   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:30.923193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.206424   78713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.283195087s)
	I0816 00:33:32.206449   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.435275   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.509193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
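Restarting an existing control plane runs individual kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`, as the five commands above show. A sketch of that sequence via os/exec, using the same phase names and config path as the log; the versioned PATH prefix from the log is elided here:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("sudo", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            // Fail fast: a later phase is pointless if an earlier one did not complete.
            if err := cmd.Run(); err != nil {
                fmt.Printf("phase %v failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }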
	I0816 00:33:32.590924   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:32.591020   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.091804   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.591198   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.607568   78713 api_server.go:72] duration metric: took 1.016656713s to wait for apiserver process to appear ...
	I0816 00:33:33.607596   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:33.607619   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:31.166506   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166900   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166927   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:31.166860   80004 retry.go:31] will retry after 1.213971674s: waiting for machine to come up
	I0816 00:33:32.382376   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382862   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382889   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:32.382821   80004 retry.go:31] will retry after 2.115615681s: waiting for machine to come up
	I0816 00:33:34.501236   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501697   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:34.501646   80004 retry.go:31] will retry after 2.495252025s: waiting for machine to come up
	I0816 00:33:36.334341   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.334374   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.334389   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.351971   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.352011   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.608364   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.614582   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:36.614619   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.107654   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.113352   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.113384   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.607902   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.614677   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.614710   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.108329   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.112493   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.112521   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.608061   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.613134   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.613172   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.107667   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.111920   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:39.111954   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.608190   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.613818   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:33:39.619467   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:39.619490   78713 api_server.go:131] duration metric: took 6.011887872s to wait for apiserver health ...
	I0816 00:33:39.619499   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:39.619504   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:39.621572   78713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:36.999158   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999616   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999645   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:36.999576   80004 retry.go:31] will retry after 2.736710806s: waiting for machine to come up
	I0816 00:33:39.737818   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738286   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738320   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:39.738215   80004 retry.go:31] will retry after 3.3205645s: waiting for machine to come up
	I0816 00:33:39.623254   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:39.633910   78713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:33:39.653736   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:39.663942   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:39.663983   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:39.663994   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:39.664044   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:39.664060   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:39.664067   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:33:39.664078   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:39.664089   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:39.664107   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:33:39.664118   78713 system_pods.go:74] duration metric: took 10.358906ms to wait for pod list to return data ...
	I0816 00:33:39.664127   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:39.667639   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:39.667669   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:39.667682   78713 node_conditions.go:105] duration metric: took 3.547018ms to run NodePressure ...
	I0816 00:33:39.667701   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:39.929620   78713 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934264   78713 kubeadm.go:739] kubelet initialised
	I0816 00:33:39.934289   78713 kubeadm.go:740] duration metric: took 4.64037ms waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934299   78713 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:39.938771   78713 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.943735   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943760   78713 pod_ready.go:82] duration metric: took 4.962601ms for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.943772   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943781   78713 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.947900   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947925   78713 pod_ready.go:82] duration metric: took 4.129605ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.947936   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947943   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.953367   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953400   78713 pod_ready.go:82] duration metric: took 5.445682ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.953412   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953422   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.057510   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057533   78713 pod_ready.go:82] duration metric: took 104.099944ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.057543   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057548   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.458355   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458389   78713 pod_ready.go:82] duration metric: took 400.832009ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.458400   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458408   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.857939   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857964   78713 pod_ready.go:82] duration metric: took 399.549123ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.857974   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857980   78713 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:41.257101   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257126   78713 pod_ready.go:82] duration metric: took 399.13078ms for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:41.257135   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257142   78713 pod_ready.go:39] duration metric: took 1.322827054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:41.257159   78713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:33:41.269076   78713 ops.go:34] apiserver oom_adj: -16
	I0816 00:33:41.269098   78713 kubeadm.go:597] duration metric: took 10.651273415s to restartPrimaryControlPlane
	I0816 00:33:41.269107   78713 kubeadm.go:394] duration metric: took 10.706599955s to StartCluster
	I0816 00:33:41.269127   78713 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.269191   78713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:33:41.271380   78713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.271679   78713 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:33:41.271714   78713 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:33:41.271812   78713 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-758469"
	I0816 00:33:41.271834   78713 addons.go:69] Setting default-storageclass=true in profile "embed-certs-758469"
	I0816 00:33:41.271845   78713 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-758469"
	W0816 00:33:41.271858   78713 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:33:41.271874   78713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-758469"
	I0816 00:33:41.271882   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:41.271891   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.271860   78713 addons.go:69] Setting metrics-server=true in profile "embed-certs-758469"
	I0816 00:33:41.271934   78713 addons.go:234] Setting addon metrics-server=true in "embed-certs-758469"
	W0816 00:33:41.271952   78713 addons.go:243] addon metrics-server should already be in state true
	I0816 00:33:41.272022   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.272324   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272575   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272604   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272704   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272718   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272745   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.274599   78713 out.go:177] * Verifying Kubernetes components...
	I0816 00:33:41.276283   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:41.292526   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0816 00:33:41.292560   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0816 00:33:41.292556   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43083
	I0816 00:33:41.293000   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293053   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293004   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293482   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293499   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293592   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293606   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293625   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293607   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293891   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293939   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293976   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.294132   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.294475   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294483   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294517   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.294522   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.297714   78713 addons.go:234] Setting addon default-storageclass=true in "embed-certs-758469"
	W0816 00:33:41.297747   78713 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:33:41.297787   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.298192   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.298238   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.310002   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0816 00:33:41.310000   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0816 00:33:41.310469   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310521   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310899   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.310917   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311027   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.311048   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311293   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311476   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.311491   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311642   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.313614   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.313697   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.315474   78713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:33:41.315484   78713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:33:41.316719   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0816 00:33:41.316887   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:33:41.316902   78713 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:33:41.316921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.316975   78713 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.316985   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:33:41.316995   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.317061   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.317572   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.317594   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.317941   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.318669   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.318702   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.320288   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320668   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.320695   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320726   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320939   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321241   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.321267   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.321402   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321497   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.321547   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321592   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.321883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.322021   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.334230   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0816 00:33:41.334580   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.335088   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.335107   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.335387   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.335549   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.336891   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.337084   78713 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.337100   78713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:33:41.337115   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.340204   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340667   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.340697   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340837   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.340987   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.341120   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.341277   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.476131   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:41.502242   78713 node_ready.go:35] waiting up to 6m0s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:41.559562   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.575913   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:33:41.575937   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:33:41.614763   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:33:41.614784   78713 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:33:41.628658   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.670367   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:41.670393   78713 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:33:41.746638   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:42.849125   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.22043382s)
	I0816 00:33:42.849189   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849202   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849397   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.289807606s)
	I0816 00:33:42.849438   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849448   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849478   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849514   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849524   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849538   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849550   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849761   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849803   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849813   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849825   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849833   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.850018   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850033   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.850059   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850059   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.850078   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856398   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.856419   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.856647   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.856667   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856676   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901261   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1545817s)
	I0816 00:33:42.901314   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901329   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901619   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901680   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901694   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901704   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901713   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901953   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901973   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901986   78713 addons.go:475] Verifying addon metrics-server=true in "embed-certs-758469"
	I0816 00:33:42.904677   78713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 00:33:42.905802   78713 addons.go:510] duration metric: took 1.634089536s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 00:33:43.506584   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:44.254575   79191 start.go:364] duration metric: took 3m52.362627542s to acquireMachinesLock for "old-k8s-version-098619"
	I0816 00:33:44.254648   79191 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:44.254659   79191 fix.go:54] fixHost starting: 
	I0816 00:33:44.255099   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:44.255137   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:44.271236   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0816 00:33:44.271591   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:44.272030   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:33:44.272052   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:44.272328   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:44.272503   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:33:44.272660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetState
	I0816 00:33:44.274235   79191 fix.go:112] recreateIfNeeded on old-k8s-version-098619: state=Stopped err=<nil>
	I0816 00:33:44.274272   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	W0816 00:33:44.274415   79191 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:44.275978   79191 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-098619" ...
	I0816 00:33:43.059949   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060413   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Found IP for machine: 192.168.50.128
	I0816 00:33:43.060440   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserving static IP address...
	I0816 00:33:43.060479   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has current primary IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060881   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.060906   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | skip adding static IP to network mk-default-k8s-diff-port-616827 - found existing host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"}
	I0816 00:33:43.060921   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserved static IP address: 192.168.50.128
	I0816 00:33:43.060937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for SSH to be available...
	I0816 00:33:43.060952   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Getting to WaitForSSH function...
	I0816 00:33:43.063249   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063552   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.063592   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH client type: external
	I0816 00:33:43.063833   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa (-rw-------)
	I0816 00:33:43.063877   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:43.063896   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | About to run SSH command:
	I0816 00:33:43.063905   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | exit 0
	I0816 00:33:43.185986   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:43.186338   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetConfigRaw
	I0816 00:33:43.186944   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.189324   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189617   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.189643   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189890   78747 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/config.json ...
	I0816 00:33:43.190166   78747 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:43.190192   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:43.190401   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.192515   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192836   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.192865   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192940   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.193118   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193280   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193454   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.193614   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.193812   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.193825   78747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:43.290143   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:43.290168   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290395   78747 buildroot.go:166] provisioning hostname "default-k8s-diff-port-616827"
	I0816 00:33:43.290422   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290603   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.293231   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.293665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293829   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.294038   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294195   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294325   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.294479   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.294685   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.294703   78747 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-616827 && echo "default-k8s-diff-port-616827" | sudo tee /etc/hostname
	I0816 00:33:43.406631   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-616827
	
	I0816 00:33:43.406655   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.409271   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409610   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.409641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409794   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.409984   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410160   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.410491   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.410670   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.410695   78747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-616827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-616827/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-616827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:43.515766   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:43.515796   78747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:43.515829   78747 buildroot.go:174] setting up certificates
	I0816 00:33:43.515841   78747 provision.go:84] configureAuth start
	I0816 00:33:43.515850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.516128   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.518730   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519055   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.519087   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.521186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.521538   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521691   78747 provision.go:143] copyHostCerts
	I0816 00:33:43.521746   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:43.521764   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:43.521822   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:43.521949   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:43.521959   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:43.521982   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:43.522050   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:43.522057   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:43.522074   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:43.522132   78747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-616827 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-616827 localhost minikube]
	I0816 00:33:43.601126   78747 provision.go:177] copyRemoteCerts
	I0816 00:33:43.601179   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:43.601203   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.603816   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604148   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.604180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.604549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.604725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.604863   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:43.686829   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:43.712297   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 00:33:43.738057   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:43.762820   78747 provision.go:87] duration metric: took 246.967064ms to configureAuth
	I0816 00:33:43.762853   78747 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:43.763069   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:43.763155   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.765886   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766256   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.766287   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766447   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.766641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766813   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.767164   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.767318   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.767334   78747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:44.025337   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:44.025373   78747 machine.go:96] duration metric: took 835.190539ms to provisionDockerMachine
	I0816 00:33:44.025387   78747 start.go:293] postStartSetup for "default-k8s-diff-port-616827" (driver="kvm2")
	I0816 00:33:44.025401   78747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:44.025416   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.025780   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:44.025804   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.028307   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028591   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.028618   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028740   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.028925   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.029117   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.029281   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.109481   78747 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:44.115290   78747 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:44.115317   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:44.115388   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:44.115482   78747 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:44.115597   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:44.128677   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:44.154643   78747 start.go:296] duration metric: took 129.242138ms for postStartSetup
	I0816 00:33:44.154685   78747 fix.go:56] duration metric: took 19.603921801s for fixHost
	I0816 00:33:44.154705   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.157477   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.157907   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.157937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.158051   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.158264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158411   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158580   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.158757   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:44.158981   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:44.158996   78747 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:44.254419   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768424.226223949
	
	I0816 00:33:44.254443   78747 fix.go:216] guest clock: 1723768424.226223949
	I0816 00:33:44.254452   78747 fix.go:229] Guest: 2024-08-16 00:33:44.226223949 +0000 UTC Remote: 2024-08-16 00:33:44.154688835 +0000 UTC m=+304.265683075 (delta=71.535114ms)
	I0816 00:33:44.254476   78747 fix.go:200] guest clock delta is within tolerance: 71.535114ms
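
fix.go compares the guest clock (read with `date +%s.%N` over SSH) against the host-side reference time and only forces a resync when the difference exceeds a tolerance; here the ~71ms delta is accepted. A small sketch of that comparison using the two timestamps from the log (the 2-second tolerance is an assumption for illustration, not minikube's actual threshold):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Values taken from the log lines above.
    guest := time.Unix(1723768424, 226223949)  // guest: date +%s.%N
    remote := time.Unix(1723768424, 154688835) // host-side reference time

    delta := guest.Sub(remote)
    if delta < 0 {
        delta = -delta
    }

    const tolerance = 2 * time.Second // assumed threshold, for illustration only
    if delta <= tolerance {
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    } else {
        fmt.Printf("guest clock is off by %v; a resync would be needed\n", delta)
    }
}

Running this prints the same 71.535114ms delta the log reports.
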
	I0816 00:33:44.254482   78747 start.go:83] releasing machines lock for "default-k8s-diff-port-616827", held for 19.703745588s
	I0816 00:33:44.254504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.254750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:44.257516   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.257879   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.257910   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.258111   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258828   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258908   78747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:44.258946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.259033   78747 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:44.259048   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.261566   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261814   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261978   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262008   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262112   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262145   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262254   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262390   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262442   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262502   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.262549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262642   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.346934   78747 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:44.370413   78747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:44.519130   78747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:44.525276   78747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:44.525344   78747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:44.549125   78747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
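
The find/mv pipeline above renames every bridge or podman CNI config so the runtime ignores it (in this run, 87-podman-bridge.conflist). A rough Go equivalent of that rename pass, as an illustration rather than the actual cni.go implementation:

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

func main() {
    dir := "/etc/cni/net.d"
    entries, err := os.ReadDir(dir)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    for _, e := range entries {
        name := e.Name()
        if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
            continue // already disabled or not a config file
        }
        if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                fmt.Fprintln(os.Stderr, err)
                continue
            }
            fmt.Println("disabled", src)
        }
    }
}
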
	I0816 00:33:44.549154   78747 start.go:495] detecting cgroup driver to use...
	I0816 00:33:44.549227   78747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:44.575221   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:44.592214   78747 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:44.592270   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:44.607403   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:44.629127   78747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:44.786185   78747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:44.954426   78747 docker.go:233] disabling docker service ...
	I0816 00:33:44.954495   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:44.975169   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:44.994113   78747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:45.142572   78747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:45.297255   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:45.313401   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:45.334780   78747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:45.334851   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.346039   78747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:45.346111   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.357681   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.368607   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.381164   78747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:45.394060   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.406010   78747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.424720   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
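
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image is registry.k8s.io/pause:3.10, the cgroup manager is cgroupfs, conmon runs in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. A self-contained sketch of the same text edits with Go regexps, operating on an in-memory sample rather than the real file:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Sample starting content; the real file is /etc/crio/crio.conf.d/02-crio.conf.
    conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
    replace := func(pattern, repl string) {
        conf = regexp.MustCompile(pattern).ReplaceAllString(conf, repl)
    }
    replace(`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`)
    replace(`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
    replace(`(?m)^conmon_cgroup = .*\n`, "")                              // drop the old value
    replace(`(?m)^(cgroup_manager = .*)$`, "$1\nconmon_cgroup = \"pod\"") // re-add it after cgroup_manager
    replace(`(?m)^(conmon_cgroup = .*)$`,
        "$1\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")

    fmt.Print(conf)
}
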
	I0816 00:33:45.437372   78747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:45.450515   78747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:45.450595   78747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:45.465740   78747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
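
Because /proc/sys/net/bridge/bridge-nf-call-iptables was missing, the br_netfilter module is loaded and IPv4 forwarding is switched on; both are standard prerequisites for bridge-based pod networking. A minimal root-only sketch of those two steps:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    // Load the bridge netfilter module so bridge-nf-call-iptables exists.
    if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
        fmt.Fprintf(os.Stderr, "modprobe br_netfilter failed: %v\n%s", err, out)
    }
    // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
        fmt.Fprintln(os.Stderr, "could not enable ip_forward:", err)
        return
    }
    fmt.Println("br_netfilter loaded and ip_forward enabled")
}
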
	I0816 00:33:45.476568   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:45.629000   78747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:45.781044   78747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:45.781142   78747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:45.787480   78747 start.go:563] Will wait 60s for crictl version
	I0816 00:33:45.787551   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:33:45.791907   78747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:45.836939   78747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:45.837025   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.869365   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.907162   78747 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:44.277288   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .Start
	I0816 00:33:44.277426   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring networks are active...
	I0816 00:33:44.278141   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network default is active
	I0816 00:33:44.278471   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network mk-old-k8s-version-098619 is active
	I0816 00:33:44.278820   79191 main.go:141] libmachine: (old-k8s-version-098619) Getting domain xml...
	I0816 00:33:44.279523   79191 main.go:141] libmachine: (old-k8s-version-098619) Creating domain...
	I0816 00:33:45.643704   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting to get IP...
	I0816 00:33:45.644691   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.645213   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.645247   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.645162   80212 retry.go:31] will retry after 198.057532ms: waiting for machine to come up
	I0816 00:33:45.844756   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.845297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.845321   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.845247   80212 retry.go:31] will retry after 288.630433ms: waiting for machine to come up
	I0816 00:33:46.135913   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.136413   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.136442   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.136365   80212 retry.go:31] will retry after 456.48021ms: waiting for machine to come up
	I0816 00:33:46.594170   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.594649   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.594678   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.594592   80212 retry.go:31] will retry after 501.49137ms: waiting for machine to come up
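
The retry.go lines above show the wait-for-IP loop for old-k8s-version-098619: each failed DHCP-lease lookup is followed by another attempt after a delay that grows, with some jitter, from a couple of hundred milliseconds to over a second. A toy sketch of that pattern; lookupIP is a hypothetical stand-in for the libvirt lease lookup, not a real libmachine call:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// lookupIP always fails here so the backoff behaviour is visible.
func lookupIP() (string, error) {
    return "", errors.New("unable to find current IP address")
}

func main() {
    delay := 200 * time.Millisecond
    for attempt := 1; attempt <= 6; attempt++ {
        if ip, err := lookupIP(); err == nil {
            fmt.Println("machine came up with IP", ip)
            return
        }
        wait := delay + time.Duration(rand.Int63n(int64(delay)))
        fmt.Printf("attempt %d failed, will retry after %v: waiting for machine to come up\n", attempt, wait)
        time.Sleep(wait)
        delay = delay * 3 / 2 // grow the delay between attempts
    }
    fmt.Println("gave up waiting for an IP address")
}
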
	I0816 00:33:46.006040   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:47.007144   78713 node_ready.go:49] node "embed-certs-758469" has status "Ready":"True"
	I0816 00:33:47.007172   78713 node_ready.go:38] duration metric: took 5.504897396s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:47.007183   78713 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:47.014800   78713 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:49.022567   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
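
pod_ready.go keeps polling the coredns pod until its Ready condition turns True or the 6-minute budget runs out. An illustrative version of that wait using kubectl's jsonpath output; the context and pod names are copied from the log, and this is not the helper minikube itself uses:

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func main() {
    const jsonpath = `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
    deadline := time.Now().Add(6 * time.Minute)
    for time.Now().Before(deadline) {
        out, err := exec.Command("kubectl", "--context", "embed-certs-758469",
            "-n", "kube-system", "get", "pod", "coredns-6f6b679f8f-54gqb",
            "-o", jsonpath).Output()
        if err == nil && strings.TrimSpace(string(out)) == "True" {
            fmt.Println(`pod has status "Ready":"True"`)
            return
        }
        time.Sleep(2 * time.Second)
    }
    fmt.Println("timed out waiting for the pod to become Ready")
}
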
	I0816 00:33:45.908518   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:45.912248   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.912762   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:45.912797   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.913115   78747 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:45.917917   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:45.935113   78747 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:45.935294   78747 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:45.935351   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:45.988031   78747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:45.988115   78747 ssh_runner.go:195] Run: which lz4
	I0816 00:33:45.992508   78747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:45.997108   78747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:45.997199   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:47.459404   78747 crio.go:462] duration metric: took 1.466928999s to copy over tarball
	I0816 00:33:47.459478   78747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:49.621449   78747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16194292s)
	I0816 00:33:49.621484   78747 crio.go:469] duration metric: took 2.162054092s to extract the tarball
	I0816 00:33:49.621494   78747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:49.660378   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:49.709446   78747 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:49.709471   78747 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:33:49.709481   78747 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.0 crio true true} ...
	I0816 00:33:49.709609   78747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-616827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:49.709704   78747 ssh_runner.go:195] Run: crio config
	I0816 00:33:49.756470   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:49.756497   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:49.756510   78747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:49.756534   78747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-616827 NodeName:default-k8s-diff-port-616827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:49.756745   78747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-616827"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:49.756827   78747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:49.766769   78747 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:49.766840   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:49.776367   78747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 00:33:49.793191   78747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:49.811993   78747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 00:33:49.829787   78747 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:49.833673   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:49.846246   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:47.098130   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.098614   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.098645   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.098569   80212 retry.go:31] will retry after 663.568587ms: waiting for machine to come up
	I0816 00:33:47.763930   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.764447   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.764470   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.764376   80212 retry.go:31] will retry after 679.581678ms: waiting for machine to come up
	I0816 00:33:48.446082   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:48.446552   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:48.446579   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:48.446498   80212 retry.go:31] will retry after 1.090430732s: waiting for machine to come up
	I0816 00:33:49.538961   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:49.539454   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:49.539482   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:49.539397   80212 retry.go:31] will retry after 1.039148258s: waiting for machine to come up
	I0816 00:33:50.579642   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:50.580119   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:50.580144   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:50.580074   80212 retry.go:31] will retry after 1.440992413s: waiting for machine to come up
	I0816 00:33:51.788858   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:54.022577   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:49.963020   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:49.980142   78747 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827 for IP: 192.168.50.128
	I0816 00:33:49.980170   78747 certs.go:194] generating shared ca certs ...
	I0816 00:33:49.980192   78747 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:49.980408   78747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:49.980470   78747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:49.980489   78747 certs.go:256] generating profile certs ...
	I0816 00:33:49.980583   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/client.key
	I0816 00:33:49.980669   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key.2062a467
	I0816 00:33:49.980737   78747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key
	I0816 00:33:49.980891   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:49.980940   78747 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:49.980949   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:49.980984   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:49.981021   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:49.981050   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:49.981102   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:49.981835   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:50.014530   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:50.057377   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:50.085730   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:50.121721   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 00:33:50.166448   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:50.195059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:50.220059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:50.244288   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:50.268463   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:50.293203   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:50.318859   78747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:50.336625   78747 ssh_runner.go:195] Run: openssl version
	I0816 00:33:50.343301   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:50.355408   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360245   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360312   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.366435   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:50.377753   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:50.389482   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394337   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394419   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.400279   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:33:50.412410   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:50.424279   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429013   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429077   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.435095   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:50.448148   78747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:50.453251   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:50.459730   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:50.466145   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:50.472438   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:50.478701   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:50.485081   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
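
Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a failure would force regeneration before kubeadm is invoked. The same freshness check expressed in Go, reading one of the paths from the log:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    block, _ := pem.Decode(data)
    if block == nil {
        fmt.Fprintln(os.Stderr, "no PEM block found")
        return
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    // Equivalent of `openssl x509 -checkend 86400`.
    if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
        fmt.Println("certificate expires within 24h; it would need to be regenerated")
    } else {
        fmt.Println("certificate is valid for at least another 24h")
    }
}
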
	I0816 00:33:50.490958   78747 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:50.491091   78747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:50.491173   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.545458   78747 cri.go:89] found id: ""
	I0816 00:33:50.545532   78747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:50.557054   78747 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:50.557074   78747 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:50.557122   78747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:50.570313   78747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:50.571774   78747 kubeconfig.go:125] found "default-k8s-diff-port-616827" server: "https://192.168.50.128:8444"
	I0816 00:33:50.574969   78747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:50.586066   78747 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I0816 00:33:50.586101   78747 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:50.586114   78747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:50.586172   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.631347   78747 cri.go:89] found id: ""
	I0816 00:33:50.631416   78747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:50.651296   78747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:50.665358   78747 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:50.665387   78747 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:50.665427   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 00:33:50.678634   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:50.678706   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:50.690376   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 00:33:50.702070   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:50.702132   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:50.714117   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.725349   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:50.725413   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.735691   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 00:33:50.745524   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:50.745598   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:50.756310   78747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:50.771825   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:50.908593   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.046812   78747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138178717s)
	I0816 00:33:52.046863   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.282111   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.357877   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.485435   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:52.485531   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:52.985717   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.486461   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.522663   78747 api_server.go:72] duration metric: took 1.037234176s to wait for apiserver process to appear ...
	I0816 00:33:53.522692   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:53.522713   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
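
api_server.go now polls https://192.168.50.128:8444/healthz until it answers 200; the 403 and 500 responses further down are interim states typically seen while the RBAC bootstrap roles and default priority classes are still being created. A stripped-down sketch of that poll loop (certificate verification is skipped here purely to keep the example short; it is not how minikube talks to the apiserver):

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            // Skipping verification keeps the sketch short; don't do this in real code.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    deadline := time.Now().Add(4 * time.Minute)
    for time.Now().Before(deadline) {
        resp, err := client.Get("https://192.168.50.128:8444/healthz")
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            if resp.StatusCode == http.StatusOK {
                return
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("apiserver never reported healthy")
}
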
	I0816 00:33:52.022573   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:52.023319   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:52.023352   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:52.023226   80212 retry.go:31] will retry after 1.814668747s: waiting for machine to come up
	I0816 00:33:53.839539   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:53.839916   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:53.839944   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:53.839861   80212 retry.go:31] will retry after 1.900379439s: waiting for machine to come up
	I0816 00:33:55.742480   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:55.742981   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:55.743004   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:55.742920   80212 retry.go:31] will retry after 2.798728298s: waiting for machine to come up
	I0816 00:33:56.782681   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.782714   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:56.782730   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:56.828595   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.828628   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:57.022870   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.028291   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.028326   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:57.522858   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.533079   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.533120   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.023304   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.029913   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:58.029948   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.523517   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.529934   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:33:58.536872   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:58.536898   78747 api_server.go:131] duration metric: took 5.014199256s to wait for apiserver health ...
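The healthz polling shown above (repeated 500s while the rbac/bootstrap-roles and scheduling post-start hooks finish, then a 200) can be reproduced with a few lines of Go. This is an illustrative sketch only, not minikube's api_server.go: the endpoint, timeout, and the InsecureSkipVerify transport (used here just to keep the sketch self-contained; minikube verifies against the cluster CA) are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Poll the apiserver healthz endpoint, tolerating 500s until it reports 200.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.128:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}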
	I0816 00:33:58.536907   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:58.536916   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:58.539004   78747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:54.522157   78713 pod_ready.go:93] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.522186   78713 pod_ready.go:82] duration metric: took 7.507358513s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.522201   78713 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529305   78713 pod_ready.go:93] pod "etcd-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.529323   78713 pod_ready.go:82] duration metric: took 7.114484ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529331   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536656   78713 pod_ready.go:93] pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.536688   78713 pod_ready.go:82] duration metric: took 7.349231ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536701   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542615   78713 pod_ready.go:93] pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.542637   78713 pod_ready.go:82] duration metric: took 5.927403ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542650   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548165   78713 pod_ready.go:93] pod "kube-proxy-4xc89" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.548188   78713 pod_ready.go:82] duration metric: took 5.530073ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548200   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919561   78713 pod_ready.go:93] pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.919586   78713 pod_ready.go:82] duration metric: took 371.377774ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919598   78713 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:56.925892   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.926811   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.540592   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:58.554493   78747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
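The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's generated bridge CNI configuration. Its exact contents are not shown in the log, so the JSON below is only a guess at a typical bridge + host-local + portmap conflist; the subnet and plugin options are assumptions. The sketch writes it the way a provisioning step might.

package main

import "os"

// Hypothetical bridge conflist; the real file minikube generates may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Equivalent of "sudo mkdir -p /etc/cni/net.d" followed by the scp in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}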
	I0816 00:33:58.594341   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:58.605247   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:58.605293   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:58.605304   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:58.605314   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:58.605329   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:58.605342   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:33:58.605351   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:58.605358   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:58.605363   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:33:58.605372   78747 system_pods.go:74] duration metric: took 11.009517ms to wait for pod list to return data ...
	I0816 00:33:58.605384   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:58.609964   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:58.609996   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:58.610007   78747 node_conditions.go:105] duration metric: took 4.615471ms to run NodePressure ...
	I0816 00:33:58.610025   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:58.930292   78747 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937469   78747 kubeadm.go:739] kubelet initialised
	I0816 00:33:58.937499   78747 kubeadm.go:740] duration metric: took 7.181814ms waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937509   78747 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:59.036968   78747 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.046554   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046589   78747 pod_ready.go:82] duration metric: took 9.589918ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.046601   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046618   78747 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.053621   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053654   78747 pod_ready.go:82] duration metric: took 7.022323ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.053669   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053678   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.065329   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065357   78747 pod_ready.go:82] duration metric: took 11.650757ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.065378   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065387   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.074595   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074627   78747 pod_ready.go:82] duration metric: took 9.230183ms for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.074643   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074657   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.399077   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399105   78747 pod_ready.go:82] duration metric: took 324.440722ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.399116   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399124   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.797130   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797158   78747 pod_ready.go:82] duration metric: took 398.024149ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.797169   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797176   78747 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:00.197929   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197961   78747 pod_ready.go:82] duration metric: took 400.777243ms for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:34:00.197976   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197992   78747 pod_ready.go:39] duration metric: took 1.260464876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
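The pod_ready.go loop above repeatedly fetches each system-critical pod and checks its Ready condition, skipping pods whose node is not yet Ready. A minimal client-go sketch of that polling pattern follows; it is an illustration rather than minikube's implementation, and the kubeconfig path, pod name, and 4-minute timeout are assumptions taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or ctx expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19452-12919/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "coredns-6f6b679f8f-4n9qq"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}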
	I0816 00:34:00.198024   78747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:34:00.210255   78747 ops.go:34] apiserver oom_adj: -16
	I0816 00:34:00.210278   78747 kubeadm.go:597] duration metric: took 9.653197586s to restartPrimaryControlPlane
	I0816 00:34:00.210302   78747 kubeadm.go:394] duration metric: took 9.719364617s to StartCluster
	I0816 00:34:00.210322   78747 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.210405   78747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:00.212730   78747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.213053   78747 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:34:00.213162   78747 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:34:00.213247   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:00.213277   78747 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213292   78747 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213305   78747 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213313   78747 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:34:00.213344   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213352   78747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-616827"
	I0816 00:34:00.213298   78747 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213413   78747 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213435   78747 addons.go:243] addon metrics-server should already be in state true
	I0816 00:34:00.213463   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213751   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213795   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213752   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213886   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213756   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213992   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.215058   78747 out.go:177] * Verifying Kubernetes components...
	I0816 00:34:00.216719   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:00.229428   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0816 00:34:00.229676   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0816 00:34:00.229881   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230164   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230522   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230538   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230689   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230727   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230850   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.231488   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.231512   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.231754   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.232394   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.232426   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.232909   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0816 00:34:00.233400   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.233959   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.233979   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.234368   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.234576   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.238180   78747 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.238203   78747 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:34:00.238230   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.238598   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.238642   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.249682   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0816 00:34:00.250163   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.250894   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.250919   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.251326   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0816 00:34:00.251324   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.251663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.251828   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.252294   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.252318   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.252863   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.253070   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.253746   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.254958   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.255056   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0816 00:34:00.255513   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.256043   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.256083   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.256121   78747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:00.256494   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.257255   78747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:34:00.257377   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.257422   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.259132   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:34:00.259154   78747 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:34:00.259176   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.259204   78747 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.259223   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:34:00.259241   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.263096   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263213   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263688   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263874   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263996   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264175   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264441   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.264511   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264695   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.274557   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0816 00:34:00.274984   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.275444   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.275463   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.275735   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.275946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.277509   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.277745   78747 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.277762   78747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:34:00.277782   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.280264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280660   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.280689   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280790   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.280982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.281140   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.281286   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.445986   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:00.465112   78747 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:00.568927   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.602693   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.620335   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:34:00.620355   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:34:00.667790   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:34:00.667810   78747 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:34:00.698510   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.698536   78747 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:34:00.723319   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.975635   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.975663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976006   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976007   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976030   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.976044   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.976075   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976347   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976340   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976376   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.983280   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.983304   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.983587   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.983586   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.983620   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.678707   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678733   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.678889   78747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.076166351s)
	I0816 00:34:01.678936   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678955   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679115   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679136   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679145   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679153   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679473   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679497   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679484   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679514   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679521   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679525   78747 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-616827"
	I0816 00:34:01.679528   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679537   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679544   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679821   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679862   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679887   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.683006   78747 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 00:33:58.543282   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:58.543753   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:58.543783   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:58.543689   80212 retry.go:31] will retry after 4.402812235s: waiting for machine to come up
	I0816 00:34:00.927244   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:03.428032   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:04.178649   78489 start.go:364] duration metric: took 54.753990439s to acquireMachinesLock for "no-preload-819398"
	I0816 00:34:04.178706   78489 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:34:04.178714   78489 fix.go:54] fixHost starting: 
	I0816 00:34:04.179124   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:04.179162   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:04.195783   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0816 00:34:04.196138   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:04.196590   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:34:04.196614   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:04.196962   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:04.197161   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:04.197303   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:34:04.198795   78489 fix.go:112] recreateIfNeeded on no-preload-819398: state=Stopped err=<nil>
	I0816 00:34:04.198814   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	W0816 00:34:04.198978   78489 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:34:04.200736   78489 out.go:177] * Restarting existing kvm2 VM for "no-preload-819398" ...
	I0816 00:34:01.684641   78747 addons.go:510] duration metric: took 1.471480873s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 00:34:02.473603   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:04.476035   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:02.951078   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951631   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has current primary IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951672   79191 main.go:141] libmachine: (old-k8s-version-098619) Found IP for machine: 192.168.72.137
	I0816 00:34:02.951687   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserving static IP address...
	I0816 00:34:02.952154   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserved static IP address: 192.168.72.137
	I0816 00:34:02.952186   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.952201   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting for SSH to be available...
	I0816 00:34:02.952224   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | skip adding static IP to network mk-old-k8s-version-098619 - found existing host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"}
	I0816 00:34:02.952236   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Getting to WaitForSSH function...
	I0816 00:34:02.954361   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954686   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.954715   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954791   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH client type: external
	I0816 00:34:02.954830   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa (-rw-------)
	I0816 00:34:02.954871   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:02.954890   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | About to run SSH command:
	I0816 00:34:02.954909   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | exit 0
	I0816 00:34:03.078035   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | SSH cmd err, output: <nil>: 
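The WaitForSSH step above shells out to /usr/bin/ssh and retries "exit 0" until the guest accepts the connection. A hedged Go equivalent using golang.org/x/crypto/ssh is sketched below; the key path, user, host, and retry interval are taken from the log or assumed, and host-key checking is disabled only to keep the sketch short.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; not for production use
		Timeout:         10 * time.Second,
	}
	for {
		client, err := ssh.Dial("tcp", "192.168.72.137:22", cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0") // same probe command as in the log
				sess.Close()
				client.Close()
				if rerr == nil {
					fmt.Println("SSH is available")
					return
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(5 * time.Second)
	}
}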
	I0816 00:34:03.078408   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:34:03.079002   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.081041   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081391   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.081489   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081566   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:34:03.081748   79191 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:03.081767   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.082007   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.084022   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084333   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.084357   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084499   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.084700   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.084867   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.085074   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.085266   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.085509   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.085525   79191 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:03.186066   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:03.186094   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186368   79191 buildroot.go:166] provisioning hostname "old-k8s-version-098619"
	I0816 00:34:03.186397   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186597   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.189330   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189658   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.189702   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189792   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.190004   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190185   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190344   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.190481   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.190665   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.190688   79191 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098619 && echo "old-k8s-version-098619" | sudo tee /etc/hostname
	I0816 00:34:03.304585   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098619
	
	I0816 00:34:03.304608   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.307415   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307732   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.307763   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307955   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.308155   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308314   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308474   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.308629   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.308795   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.308811   79191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098619/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:03.418968   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:03.419010   79191 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:03.419045   79191 buildroot.go:174] setting up certificates
	I0816 00:34:03.419058   79191 provision.go:84] configureAuth start
	I0816 00:34:03.419072   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.419338   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.421799   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422159   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.422198   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422401   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.425023   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425417   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.425445   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425557   79191 provision.go:143] copyHostCerts
	I0816 00:34:03.425624   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:03.425646   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:03.425717   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:03.425875   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:03.425888   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:03.425921   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:03.426007   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:03.426017   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:03.426045   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:03.426112   79191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098619 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-098619]
	I0816 00:34:03.509869   79191 provision.go:177] copyRemoteCerts
	I0816 00:34:03.509932   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:03.509961   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.512603   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.512938   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.512984   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.513163   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.513451   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.513617   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.513777   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:03.596330   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 00:34:03.621969   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:03.646778   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:03.671937   79191 provision.go:87] duration metric: took 252.867793ms to configureAuth
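configureAuth above regenerates the machine server certificate (org jenkins.old-k8s-version-098619, SANs 127.0.0.1, 192.168.72.137, localhost, minikube, old-k8s-version-098619) and copies it to /etc/docker/server.pem on the guest. One hedged way to double-check the SANs after provisioning, assuming openssl is present in the guest image:

	# print the SAN list of the provisioned server certificate
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'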
	I0816 00:34:03.671964   79191 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:03.672149   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:34:03.672250   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.675207   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675600   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.675625   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675787   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.676006   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676199   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676360   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.676549   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.676762   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.676779   79191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:03.945259   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:03.945287   79191 machine.go:96] duration metric: took 863.526642ms to provisionDockerMachine
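provisionDockerMachine finishes by writing CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarting cri-o. A quick manual check that the drop-in landed and the service came back, sketched under the assumption that you are shelled into the guest:

	# confirm the generated sysconfig drop-in and that cri-o restarted cleanly
	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio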
	I0816 00:34:03.945298   79191 start.go:293] postStartSetup for "old-k8s-version-098619" (driver="kvm2")
	I0816 00:34:03.945308   79191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:03.945335   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.945638   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:03.945666   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.948590   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.948967   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.948989   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.949152   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.949350   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.949491   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.949645   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.028994   79191 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:04.033776   79191 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:04.033799   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:04.033872   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:04.033943   79191 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:04.034033   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:04.045492   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:04.071879   79191 start.go:296] duration metric: took 126.569157ms for postStartSetup
	I0816 00:34:04.071920   79191 fix.go:56] duration metric: took 19.817260263s for fixHost
	I0816 00:34:04.071944   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.074942   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.075325   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075504   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.075699   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075846   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075977   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.076146   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:04.076319   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:04.076332   79191 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:04.178483   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768444.133390375
	
	I0816 00:34:04.178510   79191 fix.go:216] guest clock: 1723768444.133390375
	I0816 00:34:04.178519   79191 fix.go:229] Guest: 2024-08-16 00:34:04.133390375 +0000 UTC Remote: 2024-08-16 00:34:04.071925107 +0000 UTC m=+252.320651106 (delta=61.465268ms)
	I0816 00:34:04.178537   79191 fix.go:200] guest clock delta is within tolerance: 61.465268ms
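The clock check runs date +%s.%N inside the guest and compares it with the host time; here the ~61ms delta is within tolerance, so no resync is attempted. A rough standalone sketch of the same comparison, reusing the SSH key and address from the log (bc availability on the host is an assumption):

	# measure host/guest wall-clock skew the same way fix.go does
	host_ts=$(date +%s.%N)
	guest_ts=$(ssh -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa \
	    docker@192.168.72.137 'date +%s.%N')
	# print the delta in seconds; the run above treated ~0.06s as acceptable
	echo "$guest_ts - $host_ts" | bc -l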
	I0816 00:34:04.178541   79191 start.go:83] releasing machines lock for "old-k8s-version-098619", held for 19.923923778s
	I0816 00:34:04.178567   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.178875   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:04.181999   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182458   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.182490   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183192   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183357   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183412   79191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:04.183461   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.183553   79191 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:04.183575   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.186192   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186418   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186507   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186531   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186679   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.186811   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186836   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186850   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187016   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187032   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.187211   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187215   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.187364   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187488   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.283880   79191 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:04.289798   79191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:04.436822   79191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:04.443547   79191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:04.443631   79191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:04.464783   79191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:04.464807   79191 start.go:495] detecting cgroup driver to use...
	I0816 00:34:04.464873   79191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:04.481504   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:04.501871   79191 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:04.501942   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:04.521898   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:04.538186   79191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:04.704361   79191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:04.881682   79191 docker.go:233] disabling docker service ...
	I0816 00:34:04.881757   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:04.900264   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:04.916152   79191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:05.048440   79191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:05.166183   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:05.181888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:05.202525   79191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 00:34:05.202592   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.214655   79191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:05.214712   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.226052   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.236878   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.249217   79191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:05.260362   79191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:05.271039   79191 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:05.271108   79191 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:05.290423   79191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:05.307175   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:05.465815   79191 ssh_runner.go:195] Run: sudo systemctl restart crio
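Taken together, the runtime setup above points cri-o at the v1.20 pause image, switches it to the cgroupfs cgroup manager with conmon in the pod cgroup, clears minikube's CNI scratch directory, loads br_netfilter, enables IPv4 forwarding, and restarts the service. The same sequence collected into one shell sketch, using the commands shown in the log:

	# pause image used by Kubernetes v1.20
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	# cgroupfs as cgroup manager, conmon pinned to the pod cgroup
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# drop stale minikube CNI config, then make bridged traffic visible to iptables
	sudo rm -rf /etc/cni/net.mk
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload
	sudo systemctl restart crio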
	I0816 00:34:05.640787   79191 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:05.640878   79191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:05.646821   79191 start.go:563] Will wait 60s for crictl version
	I0816 00:34:05.646883   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:05.651455   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:05.698946   79191 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:05.699037   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.729185   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.772063   79191 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 00:34:05.773406   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:05.776689   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777177   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:05.777241   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777435   79191 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:05.782377   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:05.797691   79191 kubeadm.go:883] updating cluster {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:05.797872   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:34:05.797953   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:05.861468   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:05.861557   79191 ssh_runner.go:195] Run: which lz4
	I0816 00:34:05.866880   79191 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:34:05.872036   79191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:34:05.872071   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 00:34:04.202120   78489 main.go:141] libmachine: (no-preload-819398) Calling .Start
	I0816 00:34:04.202293   78489 main.go:141] libmachine: (no-preload-819398) Ensuring networks are active...
	I0816 00:34:04.203062   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network default is active
	I0816 00:34:04.203345   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network mk-no-preload-819398 is active
	I0816 00:34:04.205286   78489 main.go:141] libmachine: (no-preload-819398) Getting domain xml...
	I0816 00:34:04.206025   78489 main.go:141] libmachine: (no-preload-819398) Creating domain...
	I0816 00:34:05.553661   78489 main.go:141] libmachine: (no-preload-819398) Waiting to get IP...
	I0816 00:34:05.554629   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.555210   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.555309   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.555211   80407 retry.go:31] will retry after 298.759084ms: waiting for machine to come up
	I0816 00:34:05.856046   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.856571   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.856604   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.856530   80407 retry.go:31] will retry after 293.278331ms: waiting for machine to come up
	I0816 00:34:06.151110   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.151542   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.151571   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.151498   80407 retry.go:31] will retry after 332.472371ms: waiting for machine to come up
	I0816 00:34:06.485927   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.486487   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.486514   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.486459   80407 retry.go:31] will retry after 600.720276ms: waiting for machine to come up
	I0816 00:34:05.926954   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.929140   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:06.972334   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:07.469652   78747 node_ready.go:49] node "default-k8s-diff-port-616827" has status "Ready":"True"
	I0816 00:34:07.469684   78747 node_ready.go:38] duration metric: took 7.004536271s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:07.469700   78747 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:07.476054   78747 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482839   78747 pod_ready.go:93] pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.482861   78747 pod_ready.go:82] duration metric: took 6.779315ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482871   78747 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489325   78747 pod_ready.go:93] pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.489348   78747 pod_ready.go:82] duration metric: took 6.470629ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489357   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495536   78747 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.495555   78747 pod_ready.go:82] duration metric: took 6.192295ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495565   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:09.503258   78747 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.631328   79191 crio.go:462] duration metric: took 1.76448771s to copy over tarball
	I0816 00:34:07.631413   79191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:34:10.662435   79191 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.030990355s)
	I0816 00:34:10.662472   79191 crio.go:469] duration metric: took 3.031115615s to extract the tarball
	I0816 00:34:10.662482   79191 ssh_runner.go:146] rm: /preloaded.tar.lz4
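Since no preloaded images were found in cri-o's store, the ~473MB tarball is copied into the guest, unpacked into /var, and removed. A hedged sketch of doing the same step by hand, reusing the key, address, and tar flags from the log (staging through /tmp instead of / is an assumption, to avoid permission issues):

	# copy the preload tarball into the guest and unpack it into cri-o's storage under /var
	KEY=/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa
	scp -i "$KEY" \
	    /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
	    docker@192.168.72.137:/tmp/preloaded.tar.lz4
	ssh -i "$KEY" docker@192.168.72.137 \
	    'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4 && sudo crictl images --output json'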
	I0816 00:34:10.707627   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:10.745704   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:10.745742   79191 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.745838   79191 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.745914   79191 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.745860   79191 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.745943   79191 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.745884   79191 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.746059   79191 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747781   79191 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.747803   79191 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.747808   79191 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.747824   79191 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.747842   79191 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.747883   79191 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.747895   79191 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747948   79191 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.916488   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.923947   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.931668   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.942764   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.948555   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.957593   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.970039   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 00:34:11.012673   79191 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 00:34:11.012707   79191 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.012778   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.026267   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:11.135366   79191 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 00:34:11.135398   79191 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.135451   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.149180   79191 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 00:34:11.149226   79191 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.149271   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183480   79191 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 00:34:11.183526   79191 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.183526   79191 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 00:34:11.183578   79191 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.183584   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183637   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186513   79191 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 00:34:11.186559   79191 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.186622   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186632   79191 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 00:34:11.186658   79191 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 00:34:11.186699   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186722   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.252857   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.252914   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.252935   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.253007   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.253012   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.253083   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.253140   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420527   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.420559   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.420564   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.420638   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420732   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.420791   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.420813   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591141   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.591197   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.591267   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.591337   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.591418   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591453   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.591505   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 00:34:11.721234   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 00:34:11.725967   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 00:34:11.731189   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 00:34:11.731276   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 00:34:11.742195   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 00:34:11.742224   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 00:34:11.742265   79191 cache_images.go:92] duration metric: took 996.507737ms to LoadCachedImages
	W0816 00:34:11.742327   79191 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0816 00:34:11.742342   79191 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0816 00:34:11.742464   79191 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-098619 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
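The kubelet unit rendered above is streamed from memory onto the guest as a systemd drop-in; the scp lines further down show it landing at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to /lib/systemd/system/kubelet.service. A hedged sketch of applying such a drop-in by hand (the local file names 10-kubeadm.conf and kubelet.service are hypothetical stand-ins for the in-memory content):

	# stage the rendered drop-in and unit, then reload systemd and start the kubelet
	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo cp kubelet.service /lib/systemd/system/kubelet.service
	sudo systemctl daemon-reload
	sudo systemctl start kubelet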
	I0816 00:34:11.742546   79191 ssh_runner.go:195] Run: crio config
	I0816 00:34:07.089462   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.090073   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.090099   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.089985   80407 retry.go:31] will retry after 666.260439ms: waiting for machine to come up
	I0816 00:34:07.757621   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.758156   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.758182   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.758105   80407 retry.go:31] will retry after 782.571604ms: waiting for machine to come up
	I0816 00:34:08.542021   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:08.542426   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:08.542475   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:08.542381   80407 retry.go:31] will retry after 840.347921ms: waiting for machine to come up
	I0816 00:34:09.384399   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:09.384866   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:09.384893   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:09.384824   80407 retry.go:31] will retry after 1.376690861s: waiting for machine to come up
	I0816 00:34:10.763158   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:10.763547   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:10.763573   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:10.763484   80407 retry.go:31] will retry after 1.237664711s: waiting for machine to come up
	I0816 00:34:10.426656   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:12.429312   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.354758   78747 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.354783   78747 pod_ready.go:82] duration metric: took 3.859210458s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.354796   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363323   78747 pod_ready.go:93] pod "kube-proxy-f99ds" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.363347   78747 pod_ready.go:82] duration metric: took 8.543406ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363359   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369799   78747 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.369826   78747 pod_ready.go:82] duration metric: took 6.458192ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369858   78747 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:13.376479   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.791749   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:34:11.791779   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:11.791791   79191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:11.791810   79191 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098619 NodeName:old-k8s-version-098619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 00:34:11.791969   79191 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-098619"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:11.792046   79191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 00:34:11.802572   79191 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:11.802649   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:11.812583   79191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 00:34:11.831551   79191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:11.852476   79191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
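The kubeadm config shown earlier is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new before the bootstrapper applies it. A minimal hedged sketch of staging it manually (the local file name kubeadm.yaml is a hypothetical stand-in, since minikube streams the content from memory):

	# place a rendered kubeadm config where minikube's bootstrapper expects it
	sudo mkdir -p /var/tmp/minikube
	sudo cp kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new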
	I0816 00:34:11.875116   79191 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:11.879833   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:11.893308   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:12.038989   79191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:12.061736   79191 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619 for IP: 192.168.72.137
	I0816 00:34:12.061761   79191 certs.go:194] generating shared ca certs ...
	I0816 00:34:12.061780   79191 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.061992   79191 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:12.062046   79191 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:12.062059   79191 certs.go:256] generating profile certs ...
	I0816 00:34:12.062193   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key
	I0816 00:34:12.062283   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4
	I0816 00:34:12.062343   79191 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key
	I0816 00:34:12.062485   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:12.062523   79191 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:12.062536   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:12.062579   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:12.062614   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:12.062658   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:12.062721   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:12.063630   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:12.106539   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:12.139393   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:12.171548   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:12.213113   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 00:34:12.244334   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:34:12.287340   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:12.331047   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:34:12.369666   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:12.397260   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:12.424009   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:12.450212   79191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:12.471550   79191 ssh_runner.go:195] Run: openssl version
	I0816 00:34:12.479821   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:12.494855   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500546   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500620   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.508817   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:12.521689   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:12.533904   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538789   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538946   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.546762   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:12.561940   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:12.575852   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582377   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582457   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.590772   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
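	[note] The run above installs each CA by copying it under /usr/share/ca-certificates, computing its OpenSSL subject hash, and linking it as /etc/ssl/certs/<hash>.0. A minimal Go sketch of that pattern (assumes openssl is on PATH; a hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA hashes certPath with `openssl x509 -hash -noout` and links it
// under /etc/ssl/certs so OpenSSL-based clients can find it by subject hash.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}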
	I0816 00:34:12.604976   79191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:12.610332   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:12.617070   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:12.625769   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:12.634342   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:12.641486   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:12.650090   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
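	[note] Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours. The same check expressed with Go's crypto/x509 (a hedged sketch for illustration, using one of the cert paths from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires inside d.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}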
	I0816 00:34:12.658206   79191 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:12.658306   79191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:12.658392   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.703323   79191 cri.go:89] found id: ""
	I0816 00:34:12.703399   79191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:12.714950   79191 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:12.714970   79191 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:12.715047   79191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:12.727051   79191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:12.728059   79191 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:12.728655   79191 kubeconfig.go:62] /home/jenkins/minikube-integration/19452-12919/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-098619" cluster setting kubeconfig missing "old-k8s-version-098619" context setting]
	I0816 00:34:12.729552   79191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.731269   79191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:12.744732   79191 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0816 00:34:12.744766   79191 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:12.744777   79191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:12.744833   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.783356   79191 cri.go:89] found id: ""
	I0816 00:34:12.783432   79191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:12.801942   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:12.816412   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:12.816433   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:12.816480   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:12.827686   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:12.827757   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:12.838063   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:12.847714   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:12.847808   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:12.858274   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.869328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:12.869389   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.881457   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:12.892256   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:12.892325   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:12.902115   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:12.912484   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.040145   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.851639   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.085396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.208430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.321003   79191 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:14.321084   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:14.822130   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.321780   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.822121   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:16.322077   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
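	[note] The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a poll loop waiting for the apiserver process to appear, retried roughly every 500ms. A minimal Go sketch of that wait pattern (hypothetical helper; the timeout value is assumed, not taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process shows up or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 once a matching process exists
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}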
	I0816 00:34:12.002977   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:12.003441   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:12.003470   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:12.003401   80407 retry.go:31] will retry after 1.413320186s: waiting for machine to come up
	I0816 00:34:13.418972   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:13.419346   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:13.419374   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:13.419284   80407 retry.go:31] will retry after 2.055525842s: waiting for machine to come up
	I0816 00:34:15.476550   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:15.477044   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:15.477072   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:15.477021   80407 retry.go:31] will retry after 2.728500649s: waiting for machine to come up
	I0816 00:34:14.926133   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.930322   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:15.377291   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:17.877627   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.821714   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.321166   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.821648   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.321711   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.821520   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.321732   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.821325   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.321783   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.821958   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:21.321139   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.208958   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:18.209350   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:18.209379   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:18.209302   80407 retry.go:31] will retry after 3.922749943s: waiting for machine to come up
	I0816 00:34:19.426265   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.926480   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.134804   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135230   78489 main.go:141] libmachine: (no-preload-819398) Found IP for machine: 192.168.61.15
	I0816 00:34:22.135266   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has current primary IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135292   78489 main.go:141] libmachine: (no-preload-819398) Reserving static IP address...
	I0816 00:34:22.135596   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.135629   78489 main.go:141] libmachine: (no-preload-819398) DBG | skip adding static IP to network mk-no-preload-819398 - found existing host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"}
	I0816 00:34:22.135644   78489 main.go:141] libmachine: (no-preload-819398) Reserved static IP address: 192.168.61.15
	I0816 00:34:22.135661   78489 main.go:141] libmachine: (no-preload-819398) Waiting for SSH to be available...
	I0816 00:34:22.135675   78489 main.go:141] libmachine: (no-preload-819398) DBG | Getting to WaitForSSH function...
	I0816 00:34:22.137639   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.137925   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.137956   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.138099   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH client type: external
	I0816 00:34:22.138141   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa (-rw-------)
	I0816 00:34:22.138198   78489 main.go:141] libmachine: (no-preload-819398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:22.138233   78489 main.go:141] libmachine: (no-preload-819398) DBG | About to run SSH command:
	I0816 00:34:22.138248   78489 main.go:141] libmachine: (no-preload-819398) DBG | exit 0
	I0816 00:34:22.262094   78489 main.go:141] libmachine: (no-preload-819398) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:22.262496   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetConfigRaw
	I0816 00:34:22.263081   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.265419   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.265746   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.265782   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.266097   78489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/config.json ...
	I0816 00:34:22.266283   78489 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:22.266301   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:22.266501   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.268848   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269269   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.269308   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269356   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.269537   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269684   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269803   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.269971   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.270185   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.270197   78489 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:22.374848   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:22.374880   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375169   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:34:22.375195   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375407   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.378309   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378649   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.378678   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378853   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.379060   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379203   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379362   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.379568   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.379735   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.379749   78489 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-819398 && echo "no-preload-819398" | sudo tee /etc/hostname
	I0816 00:34:22.496438   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-819398
	
	I0816 00:34:22.496467   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.499101   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499411   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.499443   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499703   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.499912   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500116   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500247   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.500419   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.500624   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.500650   78489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-819398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-819398/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-819398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:22.619769   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:22.619802   78489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:22.619826   78489 buildroot.go:174] setting up certificates
	I0816 00:34:22.619837   78489 provision.go:84] configureAuth start
	I0816 00:34:22.619847   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.620106   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.623130   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623485   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.623510   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623629   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.625964   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626308   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.626335   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626475   78489 provision.go:143] copyHostCerts
	I0816 00:34:22.626536   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:22.626557   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:22.626629   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:22.626756   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:22.626768   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:22.626798   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:22.626889   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:22.626899   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:22.626925   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:22.627008   78489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.no-preload-819398 san=[127.0.0.1 192.168.61.15 localhost minikube no-preload-819398]
	I0816 00:34:22.710036   78489 provision.go:177] copyRemoteCerts
	I0816 00:34:22.710093   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:22.710120   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.712944   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713380   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.713409   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713612   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.713780   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.713926   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.714082   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:22.800996   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:22.828264   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:34:22.855258   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:22.880981   78489 provision.go:87] duration metric: took 261.134406ms to configureAuth
	I0816 00:34:22.881013   78489 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:22.881176   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:22.881240   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.883962   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884348   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.884368   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884611   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.884828   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885052   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885248   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.885448   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.885639   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.885661   78489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:23.154764   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:23.154802   78489 machine.go:96] duration metric: took 888.504728ms to provisionDockerMachine
	I0816 00:34:23.154821   78489 start.go:293] postStartSetup for "no-preload-819398" (driver="kvm2")
	I0816 00:34:23.154837   78489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:23.154860   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.155176   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:23.155205   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.158105   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158482   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.158517   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158674   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.158864   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.159039   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.159198   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.241041   78489 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:23.245237   78489 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:23.245260   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:23.245324   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:23.245398   78489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:23.245480   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:23.254735   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:23.279620   78489 start.go:296] duration metric: took 124.783636ms for postStartSetup
	I0816 00:34:23.279668   78489 fix.go:56] duration metric: took 19.100951861s for fixHost
	I0816 00:34:23.279693   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.282497   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.282959   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.282981   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.283184   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.283376   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283514   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283687   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.283870   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:23.284027   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:23.284037   78489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:23.390632   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768463.360038650
	
	I0816 00:34:23.390658   78489 fix.go:216] guest clock: 1723768463.360038650
	I0816 00:34:23.390668   78489 fix.go:229] Guest: 2024-08-16 00:34:23.36003865 +0000 UTC Remote: 2024-08-16 00:34:23.27967333 +0000 UTC m=+356.445975156 (delta=80.36532ms)
	I0816 00:34:23.390697   78489 fix.go:200] guest clock delta is within tolerance: 80.36532ms
	I0816 00:34:23.390710   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 19.212026147s
	I0816 00:34:23.390729   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.390977   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:23.393728   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394050   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.394071   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394255   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394722   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394895   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394977   78489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:23.395028   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.395135   78489 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:23.395151   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.397773   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.397939   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398196   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398237   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398354   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398480   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398507   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398515   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398717   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.398722   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398887   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398884   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.399029   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.399164   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.497983   78489 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:23.503896   78489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:23.660357   78489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:23.666714   78489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:23.666775   78489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:23.684565   78489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:23.684586   78489 start.go:495] detecting cgroup driver to use...
	I0816 00:34:23.684655   78489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:23.701981   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:23.715786   78489 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:23.715852   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:23.733513   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:23.748705   78489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:23.866341   78489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:24.016845   78489 docker.go:233] disabling docker service ...
	I0816 00:34:24.016918   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:24.032673   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:24.046465   78489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:24.184862   78489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:24.309066   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:24.323818   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:24.344352   78489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:34:24.344422   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.355015   78489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:24.355093   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.365665   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.377238   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.388619   78489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:24.399306   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.410087   78489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.428465   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.439026   78489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:24.448856   78489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:24.448943   78489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:24.463002   78489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:24.473030   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:24.587542   78489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:24.719072   78489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:24.719159   78489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:24.723789   78489 start.go:563] Will wait 60s for crictl version
	I0816 00:34:24.723842   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:24.727616   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:24.766517   78489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:24.766600   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.795204   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.824529   78489 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
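	[note] The CRI-O preparation above amounts to rewriting /etc/crio/crio.conf.d/02-crio.conf to set the pause image and the cgroupfs cgroup manager, then restarting crio. A minimal Go sketch of those two edits (a hypothetical standalone rewrite, not minikube's code; paths and values taken from the log):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// same substitutions as the two `sed -i` commands in the log
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
	// a `sudo systemctl restart crio`, as in the log, is still required afterwards
}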
	I0816 00:34:20.376278   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.376510   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:24.876314   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.822114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.321350   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.821541   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.322014   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.821938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.321883   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.821178   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.321881   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.821199   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:26.321573   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.825725   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:24.828458   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829018   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:24.829045   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829336   78489 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:24.833711   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:24.847017   78489 kubeadm.go:883] updating cluster {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:24.847136   78489 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:34:24.847171   78489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:24.883489   78489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:34:24.883515   78489 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:24.883592   78489 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.883612   78489 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.883664   78489 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:24.883690   78489 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.883719   78489 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.883595   78489 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.883927   78489 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.884016   78489 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885061   78489 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.885185   78489 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885207   78489 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.885204   78489 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.885225   78489 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.042311   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.042317   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.048181   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.050502   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.059137   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.091688   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 00:34:25.096653   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.126261   78489 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 00:34:25.126311   78489 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.126368   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.164673   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.189972   78489 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 00:34:25.190014   78489 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.190051   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249632   78489 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 00:34:25.249674   78489 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.249717   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249780   78489 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 00:34:25.249824   78489 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.249884   78489 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 00:34:25.249910   78489 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.249887   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249942   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360038   78489 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 00:34:25.360082   78489 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.360121   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360133   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.360191   78489 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 00:34:25.360208   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.360221   78489 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.360256   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360283   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.360326   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.360337   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.462610   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.462691   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.480037   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.480114   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.480176   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.480211   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.489343   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.642853   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.642913   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.642963   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.645719   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.645749   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.645833   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.645899   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.802574   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 00:34:25.802645   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.802687   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.802728   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.808235   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 00:34:25.808330   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 00:34:25.808387   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 00:34:25.808401   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 00:34:25.808432   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:25.808334   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:25.808471   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:25.808480   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:25.816510   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 00:34:25.816527   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.816560   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.885445   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 00:34:25.885532   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 00:34:25.885549   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:25.885588   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 00:34:25.885600   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:25.885674   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 00:34:25.885690   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 00:34:25.885711   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 00:34:24.426102   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.927534   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.877013   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:29.378108   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.821489   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.322094   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.321201   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.821854   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.321188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.821729   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.321316   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.821998   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:31.322184   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.938767   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.122182459s)
	I0816 00:34:27.938804   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 00:34:27.938801   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.05323098s)
	I0816 00:34:27.938826   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.05321158s)
	I0816 00:34:27.938831   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 00:34:27.938833   78489 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:27.938843   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 00:34:27.938906   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:31.645449   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.706515577s)
	I0816 00:34:31.645486   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 00:34:31.645514   78489 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:31.645563   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:29.427463   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.927253   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.875608   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:33.876822   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.821361   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.321205   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.822088   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.322126   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.821956   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.321921   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.821245   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.822034   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:36.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.625714   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.980118908s)
	I0816 00:34:33.625749   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 00:34:33.625773   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:33.625824   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:35.680134   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054281396s)
	I0816 00:34:35.680167   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 00:34:35.680209   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:35.680276   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:34.426416   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.427589   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:38.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:35.877327   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:37.877385   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.821567   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.321329   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.822169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.321832   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.821404   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.321406   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.821914   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.322169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.821149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:41.322125   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.430152   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.749849436s)
	I0816 00:34:37.430180   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 00:34:37.430208   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:37.430254   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:39.684335   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.254047221s)
	I0816 00:34:39.684365   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 00:34:39.684391   78489 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:39.684445   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:40.328672   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 00:34:40.328722   78489 cache_images.go:123] Successfully loaded all cached images
	I0816 00:34:40.328729   78489 cache_images.go:92] duration metric: took 15.445200533s to LoadCachedImages
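The block above is the no-preload cache path: each image tarball under /var/lib/minikube/images is stat'ed first, the copy is skipped when the file already exists ("copy: skipping ... (exists)"), and the tarball is then loaded with podman. The sketch below reproduces that skip-then-load idea locally; it assumes direct access to the node's filesystem rather than minikube's ssh_runner, and the tarball path is just one of the files named in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage loads a pre-pulled image tarball into the node's image store,
// bailing out when the tarball is missing (in the log, minikube would instead
// copy it over from the host cache before loading).
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball not on the node yet, a copy step would be needed: %w", err)
	}
	// Equivalent of: sudo podman load -i /var/lib/minikube/images/<image>
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/kube-scheduler_v1.31.0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}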
	I0816 00:34:40.328743   78489 kubeadm.go:934] updating node { 192.168.61.15 8443 v1.31.0 crio true true} ...
	I0816 00:34:40.328897   78489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-819398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:40.328994   78489 ssh_runner.go:195] Run: crio config
	I0816 00:34:40.383655   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:40.383675   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:40.383685   78489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:40.383712   78489 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.15 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-819398 NodeName:no-preload-819398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:34:40.383855   78489 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-819398"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:40.383930   78489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:34:40.395384   78489 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:40.395457   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:40.405037   78489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 00:34:40.423278   78489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:40.440963   78489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 00:34:40.458845   78489 ssh_runner.go:195] Run: grep 192.168.61.15	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:40.462574   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:40.475524   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:40.614624   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:40.632229   78489 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398 for IP: 192.168.61.15
	I0816 00:34:40.632252   78489 certs.go:194] generating shared ca certs ...
	I0816 00:34:40.632267   78489 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:40.632430   78489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:40.632483   78489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:40.632497   78489 certs.go:256] generating profile certs ...
	I0816 00:34:40.632598   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/client.key
	I0816 00:34:40.632679   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key.a9de72ef
	I0816 00:34:40.632759   78489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key
	I0816 00:34:40.632919   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:40.632962   78489 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:40.632978   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:40.633011   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:40.633042   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:40.633068   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:40.633124   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:40.633963   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:40.676094   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:40.707032   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:40.740455   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:40.778080   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 00:34:40.809950   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:34:40.841459   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:40.866708   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:34:40.893568   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:40.917144   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:40.942349   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:40.966731   78489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:40.984268   78489 ssh_runner.go:195] Run: openssl version
	I0816 00:34:40.990614   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:41.002909   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007595   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007645   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.013618   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:41.024886   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:41.036350   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040801   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040845   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.046554   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:41.057707   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:41.069566   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074107   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074159   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.080113   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:41.091854   78489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:41.096543   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:41.102883   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:41.109228   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:41.115622   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:41.121895   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:41.128016   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
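The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours (openssl exits non-zero when the certificate would expire within the given number of seconds). A small Go equivalent using crypto/x509 is sketched below; the certificate path is one of the files checked in the log, and reading it requires root on the node.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring what "openssl x509 -noout -in <path> -checkend 86400" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}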
	I0816 00:34:41.134126   78489 kubeadm.go:392] StartCluster: {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:41.134230   78489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:41.134310   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.178898   78489 cri.go:89] found id: ""
	I0816 00:34:41.178972   78489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:41.190167   78489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:41.190184   78489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:41.190223   78489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:41.200385   78489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:41.201824   78489 kubeconfig.go:125] found "no-preload-819398" server: "https://192.168.61.15:8443"
	I0816 00:34:41.204812   78489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:41.225215   78489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.15
	I0816 00:34:41.225252   78489 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:41.225265   78489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:41.225323   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.269288   78489 cri.go:89] found id: ""
	I0816 00:34:41.269377   78489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:41.286238   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:41.297713   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:41.297732   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:41.297782   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:41.308635   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:41.308695   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:41.320045   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:41.329866   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:41.329952   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:41.341488   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.351018   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:41.351083   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.360845   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:41.370730   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:41.370808   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:41.382572   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:41.392544   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.515558   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.425671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:43.426507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:40.377638   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:42.877395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:41.821459   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.321938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.822038   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.321447   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.821571   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.321428   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.821496   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:46.322149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.610068   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.094473643s)
	I0816 00:34:42.610106   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.850562   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.916519   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
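Rather than a full kubeadm init, the restart path above re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. The sketch below mirrors that sequence with plain exec calls; it omits the env PATH=... wrapper seen in the log and does no real error recovery, so treat it as an outline only.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		// Equivalent of: sudo kubeadm init phase <phase...> --config /var/tmp/minikube/kubeadm.yaml
		args := append([]string{kubeadm, "init", "phase"}, phase...)
		args = append(args, "--config", config)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("all init phases completed")
}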
	I0816 00:34:43.042025   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:43.042117   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.543065   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.043098   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.061154   78489 api_server.go:72] duration metric: took 1.019134992s to wait for apiserver process to appear ...
	I0816 00:34:44.061180   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:34:44.061199   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.718683   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.718717   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:46.718730   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.785528   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.785559   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:47.061692   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.066556   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.066590   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:47.562057   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.569664   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.569699   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:48.061258   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:48.065926   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:34:48.073136   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:34:48.073165   78489 api_server.go:131] duration metric: took 4.011977616s to wait for apiserver health ...
	I0816 00:34:48.073179   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:48.073189   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:48.075105   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:34:45.925817   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.925984   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:45.376424   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.377794   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.876764   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:46.822140   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.321575   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.321365   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.822009   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.321536   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.821189   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.321387   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.821982   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:51.322075   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.076340   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:34:48.113148   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:34:48.152316   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:34:48.166108   78489 system_pods.go:59] 8 kube-system pods found
	I0816 00:34:48.166142   78489 system_pods.go:61] "coredns-6f6b679f8f-sv454" [5ba1d55f-4455-4ad1-b3c8-7671ce481dd2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:34:48.166154   78489 system_pods.go:61] "etcd-no-preload-819398" [b5e55df3-fb20-4980-928f-31217bf25351] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:34:48.166164   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [7670f41c-8439-4782-a3c8-077a144d2998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:34:48.166175   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [61a6080a-5e65-4400-b230-0703f347fc17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:34:48.166182   78489 system_pods.go:61] "kube-proxy-xdm7w" [9d0517c5-8cf7-47a0-86d0-c674677e9f46] Running
	I0816 00:34:48.166191   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [af346e37-312a-4225-b3bf-0ddda71022dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:34:48.166204   78489 system_pods.go:61] "metrics-server-6867b74b74-mm5l7" [2ebc3f9f-e1a7-47b6-849e-6a4995d13206] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:34:48.166214   78489 system_pods.go:61] "storage-provisioner" [745bbfbd-aedb-4e68-946e-5a7ead1d5b48] Running
	I0816 00:34:48.166223   78489 system_pods.go:74] duration metric: took 13.883212ms to wait for pod list to return data ...
	I0816 00:34:48.166235   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:34:48.170444   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:34:48.170478   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:34:48.170492   78489 node_conditions.go:105] duration metric: took 4.251703ms to run NodePressure ...
	I0816 00:34:48.170520   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:48.437519   78489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:34:48.441992   78489 kubeadm.go:739] kubelet initialised
	I0816 00:34:48.442015   78489 kubeadm.go:740] duration metric: took 4.465986ms waiting for restarted kubelet to initialise ...
	I0816 00:34:48.442025   78489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:48.447127   78489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:50.453956   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.926184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.926515   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.876909   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.376236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.822066   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.321534   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.821154   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.321256   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.821510   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.321984   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.821175   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.321601   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:56.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.454122   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.954716   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.426224   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.926472   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.376394   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:58.876502   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.821891   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.321266   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.821346   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.321718   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.821304   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.821302   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.821563   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:01.321323   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.453951   78489 pod_ready.go:93] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:57.453974   78489 pod_ready.go:82] duration metric: took 9.00682228s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:57.453983   78489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.460582   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.961243   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:00.961269   78489 pod_ready.go:82] duration metric: took 3.507278873s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:00.961279   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468020   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:01.468047   78489 pod_ready.go:82] duration metric: took 506.758881ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468060   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.425956   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.925967   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.876678   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:03.376662   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.821317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.321560   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.821707   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.322110   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.821327   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.321430   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.821935   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.321559   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.821373   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.975498   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.975522   78489 pod_ready.go:82] duration metric: took 1.50745395s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.975531   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980290   78489 pod_ready.go:93] pod "kube-proxy-xdm7w" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.980316   78489 pod_ready.go:82] duration metric: took 4.778704ms for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980328   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988237   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.988260   78489 pod_ready.go:82] duration metric: took 7.924207ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988268   78489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:04.993992   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:04.426419   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.426648   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.927578   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:05.877102   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:07.877187   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.821405   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.321781   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.821420   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.321483   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.821347   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.321167   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.821188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.821179   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:11.322114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.994539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.995530   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.494248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.425605   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:13.426338   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:10.378729   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:12.875673   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:14.876717   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.822105   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.321963   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.822172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.321805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.821971   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:14.321784   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:14.321882   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:14.360939   79191 cri.go:89] found id: ""
	I0816 00:35:14.360962   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.360971   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:14.360976   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:14.361028   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:14.397796   79191 cri.go:89] found id: ""
	I0816 00:35:14.397824   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.397836   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:14.397858   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:14.397922   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:14.433924   79191 cri.go:89] found id: ""
	I0816 00:35:14.433950   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.433960   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:14.433968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:14.434024   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:14.468657   79191 cri.go:89] found id: ""
	I0816 00:35:14.468685   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.468696   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:14.468704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:14.468770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:14.505221   79191 cri.go:89] found id: ""
	I0816 00:35:14.505247   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.505256   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:14.505264   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:14.505323   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:14.546032   79191 cri.go:89] found id: ""
	I0816 00:35:14.546062   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.546072   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:14.546079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:14.546147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:14.581260   79191 cri.go:89] found id: ""
	I0816 00:35:14.581284   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.581292   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:14.581298   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:14.581352   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:14.616103   79191 cri.go:89] found id: ""
	I0816 00:35:14.616127   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.616134   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:14.616142   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:14.616153   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:14.690062   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:14.690106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:14.735662   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:14.735699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:14.786049   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:14.786086   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:14.800375   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:14.800405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:14.931822   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:13.494676   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.497759   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.925671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.926279   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.375842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.376005   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.432686   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:17.448728   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:17.448806   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:17.496384   79191 cri.go:89] found id: ""
	I0816 00:35:17.496523   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.496568   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:17.496581   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:17.496646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:17.560779   79191 cri.go:89] found id: ""
	I0816 00:35:17.560810   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.560820   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:17.560829   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:17.560891   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:17.606007   79191 cri.go:89] found id: ""
	I0816 00:35:17.606036   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.606047   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:17.606054   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:17.606123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:17.639910   79191 cri.go:89] found id: ""
	I0816 00:35:17.639937   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.639945   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:17.639951   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:17.640030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:17.676534   79191 cri.go:89] found id: ""
	I0816 00:35:17.676563   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.676573   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:17.676581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:17.676645   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:17.716233   79191 cri.go:89] found id: ""
	I0816 00:35:17.716255   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.716262   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:17.716268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:17.716334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:17.753648   79191 cri.go:89] found id: ""
	I0816 00:35:17.753686   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.753696   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:17.753704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:17.753763   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:17.791670   79191 cri.go:89] found id: ""
	I0816 00:35:17.791694   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.791702   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:17.791711   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:17.791722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:17.840616   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:17.840650   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:17.854949   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:17.854981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:17.933699   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:17.933724   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:17.933750   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:18.010177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:18.010211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:20.551384   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:20.564463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:20.564540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:20.604361   79191 cri.go:89] found id: ""
	I0816 00:35:20.604389   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.604399   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:20.604405   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:20.604453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:20.639502   79191 cri.go:89] found id: ""
	I0816 00:35:20.639528   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.639535   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:20.639541   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:20.639590   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:20.676430   79191 cri.go:89] found id: ""
	I0816 00:35:20.676476   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.676484   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:20.676496   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:20.676551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:20.711213   79191 cri.go:89] found id: ""
	I0816 00:35:20.711243   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.711253   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:20.711261   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:20.711320   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:20.745533   79191 cri.go:89] found id: ""
	I0816 00:35:20.745563   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.745574   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:20.745581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:20.745644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:20.781031   79191 cri.go:89] found id: ""
	I0816 00:35:20.781056   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.781064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:20.781071   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:20.781119   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:20.819966   79191 cri.go:89] found id: ""
	I0816 00:35:20.819994   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.820005   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:20.820012   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:20.820096   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:20.859011   79191 cri.go:89] found id: ""
	I0816 00:35:20.859041   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.859052   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:20.859063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:20.859078   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:20.909479   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:20.909513   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:20.925627   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:20.925653   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:21.001707   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:21.001733   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:21.001747   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:21.085853   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:21.085893   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:17.994492   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:20.496255   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:22.426663   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:21.878587   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.377462   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:23.626499   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:23.640337   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:23.640395   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:23.679422   79191 cri.go:89] found id: ""
	I0816 00:35:23.679449   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.679457   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:23.679463   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:23.679522   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:23.716571   79191 cri.go:89] found id: ""
	I0816 00:35:23.716594   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.716601   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:23.716607   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:23.716660   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:23.752539   79191 cri.go:89] found id: ""
	I0816 00:35:23.752563   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.752573   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:23.752581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:23.752640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:23.790665   79191 cri.go:89] found id: ""
	I0816 00:35:23.790693   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.790700   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:23.790707   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:23.790757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:23.827695   79191 cri.go:89] found id: ""
	I0816 00:35:23.827719   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.827727   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:23.827733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:23.827792   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:23.867664   79191 cri.go:89] found id: ""
	I0816 00:35:23.867687   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.867695   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:23.867701   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:23.867776   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:23.907844   79191 cri.go:89] found id: ""
	I0816 00:35:23.907871   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.907882   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:23.907890   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:23.907951   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:23.945372   79191 cri.go:89] found id: ""
	I0816 00:35:23.945403   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.945414   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:23.945424   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:23.945438   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:23.998270   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:23.998302   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:24.012794   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:24.012824   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:24.087285   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:24.087308   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:24.087340   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:24.167151   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:24.167184   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:26.710285   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:26.724394   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:26.724453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:26.764667   79191 cri.go:89] found id: ""
	I0816 00:35:26.764690   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.764698   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:26.764704   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:26.764756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:22.994036   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.995035   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.927042   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:27.426054   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.877007   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.376563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.806631   79191 cri.go:89] found id: ""
	I0816 00:35:26.806660   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.806670   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:26.806677   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:26.806741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:26.843434   79191 cri.go:89] found id: ""
	I0816 00:35:26.843473   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.843485   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:26.843493   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:26.843576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:26.882521   79191 cri.go:89] found id: ""
	I0816 00:35:26.882556   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.882566   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:26.882574   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:26.882635   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:26.917956   79191 cri.go:89] found id: ""
	I0816 00:35:26.917985   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.917995   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:26.918004   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:26.918056   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:26.953168   79191 cri.go:89] found id: ""
	I0816 00:35:26.953191   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.953199   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:26.953205   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:26.953251   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:26.991366   79191 cri.go:89] found id: ""
	I0816 00:35:26.991397   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.991408   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:26.991416   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:26.991479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:27.028591   79191 cri.go:89] found id: ""
	I0816 00:35:27.028619   79191 logs.go:276] 0 containers: []
	W0816 00:35:27.028626   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:27.028635   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:27.028647   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:27.111613   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:27.111645   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:27.153539   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:27.153575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:27.209377   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:27.209420   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:27.223316   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:27.223343   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:27.301411   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:29.801803   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:29.815545   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:29.815626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:29.853638   79191 cri.go:89] found id: ""
	I0816 00:35:29.853668   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.853678   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:29.853687   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:29.853756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:29.892532   79191 cri.go:89] found id: ""
	I0816 00:35:29.892554   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.892561   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:29.892567   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:29.892622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:29.932486   79191 cri.go:89] found id: ""
	I0816 00:35:29.932511   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.932519   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:29.932524   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:29.932580   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:29.973161   79191 cri.go:89] found id: ""
	I0816 00:35:29.973194   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.973205   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:29.973213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:29.973275   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:30.009606   79191 cri.go:89] found id: ""
	I0816 00:35:30.009629   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.009637   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:30.009643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:30.009691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:30.045016   79191 cri.go:89] found id: ""
	I0816 00:35:30.045043   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.045050   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:30.045057   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:30.045113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:30.079934   79191 cri.go:89] found id: ""
	I0816 00:35:30.079959   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.079968   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:30.079974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:30.080030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:30.114173   79191 cri.go:89] found id: ""
	I0816 00:35:30.114199   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.114207   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:30.114216   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:30.114227   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:30.154765   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:30.154791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:30.204410   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:30.204442   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:30.218909   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:30.218934   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:30.294141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:30.294161   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:30.294193   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:26.995394   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.494569   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.426234   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.926349   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.926433   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.376976   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.377869   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:32.872216   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:32.886211   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:32.886289   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:32.929416   79191 cri.go:89] found id: ""
	I0816 00:35:32.929440   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.929449   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:32.929456   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:32.929520   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:32.977862   79191 cri.go:89] found id: ""
	I0816 00:35:32.977887   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.977896   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:32.977920   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:32.977978   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:33.015569   79191 cri.go:89] found id: ""
	I0816 00:35:33.015593   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.015603   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:33.015622   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:33.015681   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:33.050900   79191 cri.go:89] found id: ""
	I0816 00:35:33.050934   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.050943   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:33.050959   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:33.051033   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:33.084529   79191 cri.go:89] found id: ""
	I0816 00:35:33.084556   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.084564   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:33.084569   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:33.084619   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:33.119819   79191 cri.go:89] found id: ""
	I0816 00:35:33.119845   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.119855   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:33.119863   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:33.119928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:33.159922   79191 cri.go:89] found id: ""
	I0816 00:35:33.159952   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.159959   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:33.159965   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:33.160023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:33.194977   79191 cri.go:89] found id: ""
	I0816 00:35:33.195006   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.195018   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:33.195030   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:33.195044   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:33.208578   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:33.208623   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:33.282177   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:33.282198   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:33.282211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:33.365514   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:33.365552   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:33.405190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:33.405226   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:35.959033   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:35.971866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:35.971934   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:36.008442   79191 cri.go:89] found id: ""
	I0816 00:35:36.008473   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.008483   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:36.008489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:36.008547   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:36.044346   79191 cri.go:89] found id: ""
	I0816 00:35:36.044374   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.044386   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:36.044393   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:36.044444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:36.083078   79191 cri.go:89] found id: ""
	I0816 00:35:36.083104   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.083112   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:36.083118   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:36.083166   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:36.120195   79191 cri.go:89] found id: ""
	I0816 00:35:36.120218   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.120226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:36.120232   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:36.120288   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:36.156186   79191 cri.go:89] found id: ""
	I0816 00:35:36.156215   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.156225   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:36.156233   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:36.156295   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:36.195585   79191 cri.go:89] found id: ""
	I0816 00:35:36.195613   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.195623   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:36.195631   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:36.195699   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:36.231110   79191 cri.go:89] found id: ""
	I0816 00:35:36.231133   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.231141   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:36.231147   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:36.231210   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:36.268745   79191 cri.go:89] found id: ""
	I0816 00:35:36.268770   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.268778   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:36.268786   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:36.268800   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:36.282225   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:36.282251   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:36.351401   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:36.351431   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:36.351447   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:36.429970   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:36.430003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:36.473745   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:36.473776   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:31.994163   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.994256   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.995188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:36.427247   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.926123   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.877303   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:39.027444   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:39.041107   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:39.041170   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:39.079807   79191 cri.go:89] found id: ""
	I0816 00:35:39.079830   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.079837   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:39.079843   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:39.079890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:39.115532   79191 cri.go:89] found id: ""
	I0816 00:35:39.115559   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.115569   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:39.115576   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:39.115623   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:39.150197   79191 cri.go:89] found id: ""
	I0816 00:35:39.150222   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.150233   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:39.150241   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:39.150300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:39.186480   79191 cri.go:89] found id: ""
	I0816 00:35:39.186507   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.186515   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:39.186521   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:39.186572   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:39.221576   79191 cri.go:89] found id: ""
	I0816 00:35:39.221605   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.221615   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:39.221620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:39.221669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:39.259846   79191 cri.go:89] found id: ""
	I0816 00:35:39.259877   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.259888   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:39.259896   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:39.259950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:39.294866   79191 cri.go:89] found id: ""
	I0816 00:35:39.294891   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.294898   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:39.294903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:39.294952   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:39.329546   79191 cri.go:89] found id: ""
	I0816 00:35:39.329576   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.329584   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:39.329593   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:39.329604   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:39.371579   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:39.371609   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:39.422903   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:39.422935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:39.437673   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:39.437699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:39.515146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:39.515171   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:39.515185   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:38.495377   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.495856   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.926444   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:43.426438   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.376648   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.877521   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.101733   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:42.115563   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:42.115640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:42.155187   79191 cri.go:89] found id: ""
	I0816 00:35:42.155216   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.155224   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:42.155230   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:42.155282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:42.194414   79191 cri.go:89] found id: ""
	I0816 00:35:42.194444   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.194456   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:42.194464   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:42.194523   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:42.234219   79191 cri.go:89] found id: ""
	I0816 00:35:42.234245   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.234253   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:42.234259   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:42.234314   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:42.272278   79191 cri.go:89] found id: ""
	I0816 00:35:42.272304   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.272314   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:42.272322   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:42.272381   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:42.309973   79191 cri.go:89] found id: ""
	I0816 00:35:42.309999   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.310007   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:42.310013   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:42.310066   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:42.350745   79191 cri.go:89] found id: ""
	I0816 00:35:42.350773   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.350782   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:42.350790   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:42.350853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:42.387775   79191 cri.go:89] found id: ""
	I0816 00:35:42.387803   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.387813   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:42.387832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:42.387902   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:42.425086   79191 cri.go:89] found id: ""
	I0816 00:35:42.425110   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.425118   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:42.425125   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:42.425138   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:42.515543   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:42.515575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:42.558348   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:42.558372   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:42.613026   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:42.613059   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.628907   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:42.628932   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:42.710265   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.211083   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:45.225001   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:45.225083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:45.258193   79191 cri.go:89] found id: ""
	I0816 00:35:45.258223   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.258232   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:45.258240   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:45.258297   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:45.294255   79191 cri.go:89] found id: ""
	I0816 00:35:45.294278   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.294286   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:45.294291   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:45.294335   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:45.329827   79191 cri.go:89] found id: ""
	I0816 00:35:45.329875   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.329886   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:45.329894   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:45.329944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:45.366095   79191 cri.go:89] found id: ""
	I0816 00:35:45.366124   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.366134   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:45.366141   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:45.366202   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:45.402367   79191 cri.go:89] found id: ""
	I0816 00:35:45.402390   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.402398   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:45.402403   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:45.402449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:45.439272   79191 cri.go:89] found id: ""
	I0816 00:35:45.439293   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.439300   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:45.439310   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:45.439358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:45.474351   79191 cri.go:89] found id: ""
	I0816 00:35:45.474380   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.474388   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:45.474393   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:45.474445   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:45.519636   79191 cri.go:89] found id: ""
	I0816 00:35:45.519661   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.519671   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:45.519680   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:45.519695   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:45.593425   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.593446   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:45.593458   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:45.668058   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:45.668095   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:45.716090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:45.716125   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:45.774177   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:45.774207   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.495914   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:44.996641   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.426740   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.925719   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.376025   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.376628   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.876035   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:48.288893   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:48.302256   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:48.302321   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:48.337001   79191 cri.go:89] found id: ""
	I0816 00:35:48.337030   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.337041   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:48.337048   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:48.337110   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:48.378341   79191 cri.go:89] found id: ""
	I0816 00:35:48.378367   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.378375   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:48.378384   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:48.378447   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:48.414304   79191 cri.go:89] found id: ""
	I0816 00:35:48.414383   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.414402   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:48.414410   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:48.414473   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:48.453946   79191 cri.go:89] found id: ""
	I0816 00:35:48.453969   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.453976   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:48.453982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:48.454036   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:48.489597   79191 cri.go:89] found id: ""
	I0816 00:35:48.489617   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.489623   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:48.489629   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:48.489672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:48.524195   79191 cri.go:89] found id: ""
	I0816 00:35:48.524222   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.524232   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:48.524239   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:48.524293   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:48.567854   79191 cri.go:89] found id: ""
	I0816 00:35:48.567880   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.567890   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:48.567897   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:48.567956   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:48.603494   79191 cri.go:89] found id: ""
	I0816 00:35:48.603520   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.603530   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:48.603540   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:48.603556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:48.642927   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:48.642960   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:48.693761   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:48.693791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:48.708790   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:48.708818   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:48.780072   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:48.780092   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:48.780106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.362108   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:51.376113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:51.376185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:51.413988   79191 cri.go:89] found id: ""
	I0816 00:35:51.414022   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.414033   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:51.414041   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:51.414101   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:51.460901   79191 cri.go:89] found id: ""
	I0816 00:35:51.460937   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.460948   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:51.460956   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:51.461019   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:51.497178   79191 cri.go:89] found id: ""
	I0816 00:35:51.497205   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.497215   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:51.497223   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:51.497365   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:51.534559   79191 cri.go:89] found id: ""
	I0816 00:35:51.534589   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.534600   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:51.534607   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:51.534668   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:51.570258   79191 cri.go:89] found id: ""
	I0816 00:35:51.570280   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.570287   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:51.570293   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:51.570356   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:51.609639   79191 cri.go:89] found id: ""
	I0816 00:35:51.609665   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.609675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:51.609683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:51.609742   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:51.645629   79191 cri.go:89] found id: ""
	I0816 00:35:51.645652   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.645659   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:51.645664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:51.645731   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:51.683325   79191 cri.go:89] found id: ""
	I0816 00:35:51.683344   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.683351   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:51.683358   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:51.683369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:51.739101   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:51.739133   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:51.753436   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:51.753466   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:35:47.494904   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.495416   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.926975   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:51.928318   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:52.376854   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.880623   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:35:51.831242   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:51.831268   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:51.831294   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.926924   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:51.926970   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:54.472667   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:54.486706   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:54.486785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:54.524180   79191 cri.go:89] found id: ""
	I0816 00:35:54.524203   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.524211   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:54.524216   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:54.524273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:54.563758   79191 cri.go:89] found id: ""
	I0816 00:35:54.563781   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.563788   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:54.563795   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:54.563859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:54.599442   79191 cri.go:89] found id: ""
	I0816 00:35:54.599471   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.599481   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:54.599488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:54.599553   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:54.633521   79191 cri.go:89] found id: ""
	I0816 00:35:54.633547   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.633558   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:54.633565   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:54.633628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:54.670036   79191 cri.go:89] found id: ""
	I0816 00:35:54.670064   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.670075   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:54.670083   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:54.670148   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:54.707565   79191 cri.go:89] found id: ""
	I0816 00:35:54.707587   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.707594   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:54.707600   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:54.707659   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:54.744500   79191 cri.go:89] found id: ""
	I0816 00:35:54.744530   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.744541   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:54.744548   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:54.744612   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:54.778964   79191 cri.go:89] found id: ""
	I0816 00:35:54.778988   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.778995   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:54.779007   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:54.779020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:54.831806   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:54.831838   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:54.845954   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:54.845979   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:54.921817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:54.921855   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:54.921871   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:55.006401   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:55.006439   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:51.996591   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.495673   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.427044   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:56.927184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.376333   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.548661   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:57.562489   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:57.562549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:57.597855   79191 cri.go:89] found id: ""
	I0816 00:35:57.597881   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.597891   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:57.597899   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:57.597961   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:57.634085   79191 cri.go:89] found id: ""
	I0816 00:35:57.634114   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.634126   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:57.634133   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:57.634193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:57.671748   79191 cri.go:89] found id: ""
	I0816 00:35:57.671779   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.671788   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:57.671795   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:57.671859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:57.708836   79191 cri.go:89] found id: ""
	I0816 00:35:57.708862   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.708870   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:57.708877   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:57.708940   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:57.744601   79191 cri.go:89] found id: ""
	I0816 00:35:57.744630   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.744639   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:57.744645   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:57.744706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:57.781888   79191 cri.go:89] found id: ""
	I0816 00:35:57.781919   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.781929   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:57.781937   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:57.781997   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:57.822612   79191 cri.go:89] found id: ""
	I0816 00:35:57.822634   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.822641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:57.822647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:57.822706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:57.873968   79191 cri.go:89] found id: ""
	I0816 00:35:57.873998   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.874008   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:57.874019   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:57.874037   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:57.896611   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:57.896643   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:57.995575   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:57.995597   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:57.995612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:58.077196   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:58.077230   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:58.116956   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:58.116985   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:00.664805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:00.678425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:00.678501   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:00.715522   79191 cri.go:89] found id: ""
	I0816 00:36:00.715548   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.715557   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:00.715562   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:00.715608   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:00.749892   79191 cri.go:89] found id: ""
	I0816 00:36:00.749920   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.749931   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:00.749938   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:00.750006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:00.787302   79191 cri.go:89] found id: ""
	I0816 00:36:00.787325   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.787332   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:00.787338   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:00.787392   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:00.821866   79191 cri.go:89] found id: ""
	I0816 00:36:00.821894   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.821906   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:00.821914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:00.821971   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:00.856346   79191 cri.go:89] found id: ""
	I0816 00:36:00.856369   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.856377   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:00.856382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:00.856431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:00.893569   79191 cri.go:89] found id: ""
	I0816 00:36:00.893596   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.893606   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:00.893614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:00.893677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:00.930342   79191 cri.go:89] found id: ""
	I0816 00:36:00.930367   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.930378   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:00.930386   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:00.930622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:00.966039   79191 cri.go:89] found id: ""
	I0816 00:36:00.966071   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.966085   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:00.966095   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:00.966109   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:01.045594   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:01.045631   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:01.089555   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:01.089586   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:01.141597   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:01.141633   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:01.156260   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:01.156286   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:01.230573   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:56.995077   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:58.995897   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.495116   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.426099   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.926011   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.927327   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.376842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.875993   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.730825   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:03.744766   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:03.744838   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:03.781095   79191 cri.go:89] found id: ""
	I0816 00:36:03.781124   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.781142   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:03.781150   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:03.781215   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:03.815637   79191 cri.go:89] found id: ""
	I0816 00:36:03.815669   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.815680   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:03.815687   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:03.815741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:03.850076   79191 cri.go:89] found id: ""
	I0816 00:36:03.850110   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.850122   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:03.850130   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:03.850185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:03.888840   79191 cri.go:89] found id: ""
	I0816 00:36:03.888863   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.888872   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:03.888879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:03.888941   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:03.928317   79191 cri.go:89] found id: ""
	I0816 00:36:03.928341   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.928350   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:03.928359   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:03.928413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:03.964709   79191 cri.go:89] found id: ""
	I0816 00:36:03.964741   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.964751   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:03.964760   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:03.964830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:03.999877   79191 cri.go:89] found id: ""
	I0816 00:36:03.999902   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.999912   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:03.999919   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:03.999981   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:04.036772   79191 cri.go:89] found id: ""
	I0816 00:36:04.036799   79191 logs.go:276] 0 containers: []
	W0816 00:36:04.036810   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:04.036820   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:04.036833   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:04.118843   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:04.118879   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:04.162491   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:04.162548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:04.215100   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:04.215134   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:04.229043   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:04.229069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:04.307480   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:03.495661   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.995711   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.426223   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:08.426470   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.876718   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:07.877431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.807640   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:06.821144   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:06.821203   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:06.857743   79191 cri.go:89] found id: ""
	I0816 00:36:06.857776   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.857786   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:06.857794   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:06.857872   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:06.895980   79191 cri.go:89] found id: ""
	I0816 00:36:06.896007   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.896018   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:06.896025   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:06.896090   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:06.935358   79191 cri.go:89] found id: ""
	I0816 00:36:06.935389   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.935399   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:06.935406   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:06.935461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:06.971533   79191 cri.go:89] found id: ""
	I0816 00:36:06.971561   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.971572   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:06.971580   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:06.971640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:07.007786   79191 cri.go:89] found id: ""
	I0816 00:36:07.007812   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.007823   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:07.007830   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:07.007890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:07.044060   79191 cri.go:89] found id: ""
	I0816 00:36:07.044092   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.044104   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:07.044112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:07.044185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:07.080058   79191 cri.go:89] found id: ""
	I0816 00:36:07.080085   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.080094   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:07.080101   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:07.080156   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:07.117749   79191 cri.go:89] found id: ""
	I0816 00:36:07.117773   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.117780   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:07.117787   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:07.117799   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:07.171418   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:07.171453   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:07.185520   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:07.185542   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:07.257817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:07.257872   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:07.257888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:07.339530   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:07.339576   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:09.882613   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:09.895873   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:09.895950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:09.936739   79191 cri.go:89] found id: ""
	I0816 00:36:09.936766   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.936774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:09.936780   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:09.936836   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:09.974145   79191 cri.go:89] found id: ""
	I0816 00:36:09.974168   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.974180   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:09.974186   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:09.974243   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:10.012166   79191 cri.go:89] found id: ""
	I0816 00:36:10.012196   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.012206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:10.012214   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:10.012265   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:10.051080   79191 cri.go:89] found id: ""
	I0816 00:36:10.051103   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.051111   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:10.051117   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:10.051176   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:10.088519   79191 cri.go:89] found id: ""
	I0816 00:36:10.088548   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.088559   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:10.088567   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:10.088628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:10.123718   79191 cri.go:89] found id: ""
	I0816 00:36:10.123744   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.123752   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:10.123758   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:10.123805   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:10.161900   79191 cri.go:89] found id: ""
	I0816 00:36:10.161922   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.161929   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:10.161995   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:10.162064   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:10.196380   79191 cri.go:89] found id: ""
	I0816 00:36:10.196408   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.196419   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:10.196429   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:10.196443   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:10.248276   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:10.248309   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:10.262241   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:10.262269   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:10.340562   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:10.340598   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:10.340626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:10.417547   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:10.417578   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:07.996930   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:09.997666   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.426502   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.426976   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.377172   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.877236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.962310   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:12.976278   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:12.976338   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:13.014501   79191 cri.go:89] found id: ""
	I0816 00:36:13.014523   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.014530   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:13.014536   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:13.014587   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:13.055942   79191 cri.go:89] found id: ""
	I0816 00:36:13.055970   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.055979   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:13.055987   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:13.056048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:13.090309   79191 cri.go:89] found id: ""
	I0816 00:36:13.090336   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.090346   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:13.090354   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:13.090413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:13.124839   79191 cri.go:89] found id: ""
	I0816 00:36:13.124865   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.124876   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:13.124884   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:13.124945   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:13.164535   79191 cri.go:89] found id: ""
	I0816 00:36:13.164560   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.164567   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:13.164573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:13.164630   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:13.198651   79191 cri.go:89] found id: ""
	I0816 00:36:13.198699   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.198710   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:13.198718   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:13.198785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:13.233255   79191 cri.go:89] found id: ""
	I0816 00:36:13.233278   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.233286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:13.233292   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:13.233348   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:13.267327   79191 cri.go:89] found id: ""
	I0816 00:36:13.267351   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.267359   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:13.267367   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:13.267384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:13.352053   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:13.352089   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:13.393438   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:13.393471   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:13.445397   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:13.445430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:13.459143   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:13.459177   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:13.530160   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.031296   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:16.045557   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:16.045618   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:16.081828   79191 cri.go:89] found id: ""
	I0816 00:36:16.081871   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.081882   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:16.081890   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:16.081949   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:16.116228   79191 cri.go:89] found id: ""
	I0816 00:36:16.116254   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.116264   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:16.116272   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:16.116334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:16.150051   79191 cri.go:89] found id: ""
	I0816 00:36:16.150079   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.150087   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:16.150093   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:16.150139   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:16.186218   79191 cri.go:89] found id: ""
	I0816 00:36:16.186241   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.186248   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:16.186254   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:16.186301   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:16.223223   79191 cri.go:89] found id: ""
	I0816 00:36:16.223255   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.223263   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:16.223270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:16.223316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:16.259929   79191 cri.go:89] found id: ""
	I0816 00:36:16.259953   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.259960   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:16.259970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:16.260099   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:16.294611   79191 cri.go:89] found id: ""
	I0816 00:36:16.294633   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.294641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:16.294649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:16.294725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:16.333492   79191 cri.go:89] found id: ""
	I0816 00:36:16.333523   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.333533   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:16.333544   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:16.333563   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:16.385970   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:16.386002   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:16.400359   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:16.400384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:16.471363   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.471388   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:16.471408   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:16.555990   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:16.556022   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:12.495406   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.995145   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.926160   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.426768   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:15.376672   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.876395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.876542   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.099502   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:19.112649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:19.112706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:19.145809   79191 cri.go:89] found id: ""
	I0816 00:36:19.145837   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.145858   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:19.145865   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:19.145928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:19.183737   79191 cri.go:89] found id: ""
	I0816 00:36:19.183763   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.183774   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:19.183781   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:19.183841   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:19.219729   79191 cri.go:89] found id: ""
	I0816 00:36:19.219756   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.219764   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:19.219770   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:19.219815   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:19.254450   79191 cri.go:89] found id: ""
	I0816 00:36:19.254474   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.254481   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:19.254488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:19.254540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:19.289543   79191 cri.go:89] found id: ""
	I0816 00:36:19.289573   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.289585   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:19.289592   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:19.289651   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:19.330727   79191 cri.go:89] found id: ""
	I0816 00:36:19.330748   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.330756   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:19.330762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:19.330809   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:19.368952   79191 cri.go:89] found id: ""
	I0816 00:36:19.368978   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.368986   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:19.368992   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:19.369048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:19.406211   79191 cri.go:89] found id: ""
	I0816 00:36:19.406247   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.406258   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:19.406268   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:19.406282   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:19.457996   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:19.458032   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:19.472247   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:19.472274   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:19.542840   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:19.542862   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:19.542876   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:19.624478   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:19.624520   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:16.997148   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.496434   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.427251   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:21.925550   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.925858   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.376318   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:24.376431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.165884   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:22.180005   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:22.180078   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:22.217434   79191 cri.go:89] found id: ""
	I0816 00:36:22.217463   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.217471   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:22.217478   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:22.217534   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:22.250679   79191 cri.go:89] found id: ""
	I0816 00:36:22.250708   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.250717   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:22.250725   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:22.250785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:22.284294   79191 cri.go:89] found id: ""
	I0816 00:36:22.284324   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.284334   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:22.284341   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:22.284403   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:22.320747   79191 cri.go:89] found id: ""
	I0816 00:36:22.320779   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.320790   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:22.320799   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:22.320858   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:22.355763   79191 cri.go:89] found id: ""
	I0816 00:36:22.355793   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.355803   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:22.355811   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:22.355871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:22.392762   79191 cri.go:89] found id: ""
	I0816 00:36:22.392788   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.392796   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:22.392802   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:22.392860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:22.426577   79191 cri.go:89] found id: ""
	I0816 00:36:22.426605   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.426614   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:22.426621   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:22.426682   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:22.459989   79191 cri.go:89] found id: ""
	I0816 00:36:22.460018   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.460030   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:22.460040   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:22.460054   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:22.545782   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:22.545820   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:22.587404   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:22.587431   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:22.638519   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:22.638559   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:22.653064   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:22.653087   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:22.734333   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.234823   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:25.248716   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:25.248787   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:25.284760   79191 cri.go:89] found id: ""
	I0816 00:36:25.284786   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.284793   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:25.284799   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:25.284870   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:25.325523   79191 cri.go:89] found id: ""
	I0816 00:36:25.325548   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.325556   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:25.325562   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:25.325621   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:25.365050   79191 cri.go:89] found id: ""
	I0816 00:36:25.365078   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.365088   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:25.365096   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:25.365155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:25.405005   79191 cri.go:89] found id: ""
	I0816 00:36:25.405038   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.405049   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:25.405062   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:25.405121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:25.444622   79191 cri.go:89] found id: ""
	I0816 00:36:25.444648   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.444656   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:25.444662   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:25.444710   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:25.485364   79191 cri.go:89] found id: ""
	I0816 00:36:25.485394   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.485404   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:25.485413   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:25.485492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:25.521444   79191 cri.go:89] found id: ""
	I0816 00:36:25.521471   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.521482   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:25.521490   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:25.521550   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:25.556763   79191 cri.go:89] found id: ""
	I0816 00:36:25.556789   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.556796   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:25.556805   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:25.556817   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:25.606725   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:25.606759   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:25.623080   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:25.623108   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:25.705238   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.705258   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:25.705280   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:25.782188   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:25.782224   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:21.994519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.995061   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.494442   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:25.926835   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.427012   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.876206   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.876563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.325018   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:28.337778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:28.337860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:28.378452   79191 cri.go:89] found id: ""
	I0816 00:36:28.378482   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.378492   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:28.378499   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:28.378556   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:28.412103   79191 cri.go:89] found id: ""
	I0816 00:36:28.412132   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.412143   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:28.412150   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:28.412214   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:28.447363   79191 cri.go:89] found id: ""
	I0816 00:36:28.447388   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.447396   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:28.447401   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:28.447452   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:28.481199   79191 cri.go:89] found id: ""
	I0816 00:36:28.481228   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.481242   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:28.481251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:28.481305   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:28.517523   79191 cri.go:89] found id: ""
	I0816 00:36:28.517545   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.517552   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:28.517558   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:28.517620   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:28.552069   79191 cri.go:89] found id: ""
	I0816 00:36:28.552101   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.552112   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:28.552120   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:28.552193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:28.594124   79191 cri.go:89] found id: ""
	I0816 00:36:28.594148   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.594158   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:28.594166   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:28.594228   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:28.631451   79191 cri.go:89] found id: ""
	I0816 00:36:28.631472   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.631480   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:28.631488   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:28.631498   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:28.685335   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:28.685368   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:28.700852   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:28.700877   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:28.773932   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:28.773957   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:28.773972   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:28.848951   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:28.848989   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:31.389208   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:31.403731   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:31.403813   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:31.440979   79191 cri.go:89] found id: ""
	I0816 00:36:31.441010   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.441020   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:31.441028   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:31.441092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:31.476435   79191 cri.go:89] found id: ""
	I0816 00:36:31.476458   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.476465   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:31.476471   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:31.476530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:31.514622   79191 cri.go:89] found id: ""
	I0816 00:36:31.514644   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.514651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:31.514657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:31.514715   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:31.554503   79191 cri.go:89] found id: ""
	I0816 00:36:31.554533   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.554543   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:31.554551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:31.554609   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:31.590283   79191 cri.go:89] found id: ""
	I0816 00:36:31.590317   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.590325   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:31.590332   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:31.590380   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:31.625969   79191 cri.go:89] found id: ""
	I0816 00:36:31.626003   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.626014   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:31.626031   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:31.626102   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:31.660489   79191 cri.go:89] found id: ""
	I0816 00:36:31.660513   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.660520   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:31.660526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:31.660583   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:31.694728   79191 cri.go:89] found id: ""
	I0816 00:36:31.694761   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.694769   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:31.694779   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:31.694790   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:31.760631   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:31.760663   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:31.774858   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:31.774886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:36:28.994228   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.994276   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.926313   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.426045   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.877175   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.378602   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:36:31.851125   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:31.851145   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:31.851156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:31.934491   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:31.934521   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:34.476368   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:34.489252   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:34.489308   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:34.524932   79191 cri.go:89] found id: ""
	I0816 00:36:34.524964   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.524972   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:34.524977   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:34.525032   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:34.559434   79191 cri.go:89] found id: ""
	I0816 00:36:34.559462   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.559473   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:34.559481   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:34.559543   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:34.598700   79191 cri.go:89] found id: ""
	I0816 00:36:34.598728   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.598739   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:34.598747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:34.598808   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:34.632413   79191 cri.go:89] found id: ""
	I0816 00:36:34.632438   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.632448   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:34.632456   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:34.632514   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:34.668385   79191 cri.go:89] found id: ""
	I0816 00:36:34.668409   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.668418   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:34.668425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:34.668486   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:34.703728   79191 cri.go:89] found id: ""
	I0816 00:36:34.703754   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.703764   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:34.703772   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:34.703832   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:34.743119   79191 cri.go:89] found id: ""
	I0816 00:36:34.743152   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.743161   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:34.743171   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:34.743230   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:34.778932   79191 cri.go:89] found id: ""
	I0816 00:36:34.778955   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.778963   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:34.778971   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:34.778987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:34.832050   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:34.832084   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:34.845700   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:34.845728   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:34.917535   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:34.917554   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:34.917565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:35.005262   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:35.005295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
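	The cycle above is minikube's log collector (logs.go) waiting for the API server: it probes each control-plane component with crictl, then gathers kubelet, dmesg, CRI-O and node information. As a rough sketch, the same checks can be reproduced by hand inside the node (assuming SSH access to the minikube VM; every command below is one the runner itself invokes):

	  # probe for a running kube-apiserver container (repeated for etcd, coredns, the scheduler, etc.)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # collect recent kubelet and CRI-O service logs
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  # kernel warnings and errors
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  # describe nodes via the bundled kubectl; this is the step that fails here,
	  # because nothing is listening on localhost:8443 yet
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig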
	I0816 00:36:32.994435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:34.994503   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.926422   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.876400   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:38.376351   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.547107   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:37.562035   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:37.562095   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:37.605992   79191 cri.go:89] found id: ""
	I0816 00:36:37.606021   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.606028   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:37.606035   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:37.606092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:37.642613   79191 cri.go:89] found id: ""
	I0816 00:36:37.642642   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.642653   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:37.642660   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:37.642708   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:37.677810   79191 cri.go:89] found id: ""
	I0816 00:36:37.677863   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.677875   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:37.677883   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:37.677939   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:37.714490   79191 cri.go:89] found id: ""
	I0816 00:36:37.714514   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.714522   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:37.714529   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:37.714575   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:37.750807   79191 cri.go:89] found id: ""
	I0816 00:36:37.750837   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.750844   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:37.750850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:37.750912   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:37.790307   79191 cri.go:89] found id: ""
	I0816 00:36:37.790337   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.790347   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:37.790355   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:37.790404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:37.826811   79191 cri.go:89] found id: ""
	I0816 00:36:37.826838   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.826848   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:37.826856   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:37.826920   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:37.862066   79191 cri.go:89] found id: ""
	I0816 00:36:37.862091   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.862101   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:37.862112   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:37.862127   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:37.917127   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:37.917161   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:37.932986   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:37.933024   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:38.008715   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:38.008739   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:38.008754   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:38.088744   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:38.088778   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:40.643426   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:40.659064   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:40.659128   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:40.702486   79191 cri.go:89] found id: ""
	I0816 00:36:40.702513   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.702523   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:40.702530   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:40.702595   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:40.736016   79191 cri.go:89] found id: ""
	I0816 00:36:40.736044   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.736057   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:40.736064   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:40.736125   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:40.779665   79191 cri.go:89] found id: ""
	I0816 00:36:40.779704   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.779724   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:40.779733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:40.779795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:40.818612   79191 cri.go:89] found id: ""
	I0816 00:36:40.818633   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.818640   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:40.818647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:40.818695   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:40.855990   79191 cri.go:89] found id: ""
	I0816 00:36:40.856014   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.856021   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:40.856027   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:40.856074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:40.894792   79191 cri.go:89] found id: ""
	I0816 00:36:40.894827   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.894836   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:40.894845   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:40.894894   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:40.932233   79191 cri.go:89] found id: ""
	I0816 00:36:40.932255   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.932263   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:40.932268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:40.932324   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:40.974601   79191 cri.go:89] found id: ""
	I0816 00:36:40.974624   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.974633   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:40.974642   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:40.974660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:41.049185   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:41.049209   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:41.049223   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:41.129446   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:41.129481   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:41.170312   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:41.170341   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:41.226217   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:41.226254   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:36.995268   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:39.494273   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:41.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.426501   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.926122   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.877227   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.878644   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
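	The interleaved pod_ready lines come from the other StartStop clusters, each polling a metrics-server pod that keeps reporting "Ready":"False". As a hedged sketch (the k8s-app=metrics-server label and the <profile> placeholder are assumptions, not taken from this log), the pod's state can be inspected directly with kubectl:

	  # list the metrics-server pod and where it is scheduled (label assumed from the metrics-server addon)
	  kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -o wide
	  # show events and container statuses explaining why Ready stays False
	  kubectl --context <profile> -n kube-system describe pods -l k8s-app=metrics-server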
	I0816 00:36:43.741485   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:43.756248   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:43.756325   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:43.792440   79191 cri.go:89] found id: ""
	I0816 00:36:43.792469   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.792480   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:43.792488   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:43.792549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:43.829906   79191 cri.go:89] found id: ""
	I0816 00:36:43.829933   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.829941   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:43.829947   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:43.830003   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:43.880305   79191 cri.go:89] found id: ""
	I0816 00:36:43.880330   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.880337   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:43.880343   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:43.880399   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:43.937899   79191 cri.go:89] found id: ""
	I0816 00:36:43.937929   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.937939   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:43.937953   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:43.938023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:43.997578   79191 cri.go:89] found id: ""
	I0816 00:36:43.997603   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.997610   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:43.997620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:43.997672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:44.035606   79191 cri.go:89] found id: ""
	I0816 00:36:44.035629   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.035637   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:44.035643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:44.035692   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:44.072919   79191 cri.go:89] found id: ""
	I0816 00:36:44.072950   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.072961   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:44.072968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:44.073043   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:44.108629   79191 cri.go:89] found id: ""
	I0816 00:36:44.108659   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.108681   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:44.108692   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:44.108705   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:44.149127   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:44.149151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:44.201694   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:44.201737   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:44.217161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:44.217199   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:44.284335   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:44.284362   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:44.284379   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:43.996478   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.494382   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:44.926542   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.926713   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:45.376030   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:47.875418   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.877201   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.869196   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:46.883519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:46.883584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:46.924767   79191 cri.go:89] found id: ""
	I0816 00:36:46.924806   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.924821   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:46.924829   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:46.924889   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:46.963282   79191 cri.go:89] found id: ""
	I0816 00:36:46.963309   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.963320   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:46.963327   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:46.963389   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:47.001421   79191 cri.go:89] found id: ""
	I0816 00:36:47.001450   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.001458   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:47.001463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:47.001518   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:47.037679   79191 cri.go:89] found id: ""
	I0816 00:36:47.037702   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.037713   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:47.037720   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:47.037778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:47.078009   79191 cri.go:89] found id: ""
	I0816 00:36:47.078039   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.078050   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:47.078056   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:47.078113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:47.119032   79191 cri.go:89] found id: ""
	I0816 00:36:47.119056   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.119064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:47.119069   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:47.119127   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:47.154893   79191 cri.go:89] found id: ""
	I0816 00:36:47.154919   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.154925   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:47.154933   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:47.154993   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:47.194544   79191 cri.go:89] found id: ""
	I0816 00:36:47.194571   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.194582   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:47.194592   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:47.194612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:47.267148   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:47.267172   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:47.267186   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:47.345257   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:47.345295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:47.386207   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:47.386233   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:47.436171   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:47.436201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:49.949977   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:49.965702   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:49.965761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:50.002443   79191 cri.go:89] found id: ""
	I0816 00:36:50.002470   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.002481   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:50.002489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:50.002548   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:50.039123   79191 cri.go:89] found id: ""
	I0816 00:36:50.039155   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.039162   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:50.039168   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:50.039220   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:50.074487   79191 cri.go:89] found id: ""
	I0816 00:36:50.074517   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.074527   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:50.074535   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:50.074593   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:50.108980   79191 cri.go:89] found id: ""
	I0816 00:36:50.109008   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.109018   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:50.109025   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:50.109082   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:50.149182   79191 cri.go:89] found id: ""
	I0816 00:36:50.149202   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.149209   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:50.149215   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:50.149261   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:50.183066   79191 cri.go:89] found id: ""
	I0816 00:36:50.183094   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.183102   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:50.183108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:50.183165   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:50.220200   79191 cri.go:89] found id: ""
	I0816 00:36:50.220231   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.220240   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:50.220246   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:50.220302   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:50.258059   79191 cri.go:89] found id: ""
	I0816 00:36:50.258083   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.258092   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:50.258100   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:50.258110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:50.300560   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:50.300591   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:50.350548   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:50.350581   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:50.364792   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:50.364816   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:50.437723   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:50.437746   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:50.437761   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:48.995009   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:50.995542   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.425926   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:51.427896   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.926363   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:52.375826   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:54.876435   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.015846   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:53.029184   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:53.029246   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:53.064306   79191 cri.go:89] found id: ""
	I0816 00:36:53.064338   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.064346   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:53.064352   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:53.064404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:53.104425   79191 cri.go:89] found id: ""
	I0816 00:36:53.104458   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.104468   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:53.104476   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:53.104538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:53.139470   79191 cri.go:89] found id: ""
	I0816 00:36:53.139493   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.139500   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:53.139506   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:53.139551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:53.185195   79191 cri.go:89] found id: ""
	I0816 00:36:53.185225   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.185234   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:53.185242   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:53.185300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:53.221897   79191 cri.go:89] found id: ""
	I0816 00:36:53.221925   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.221935   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:53.221943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:53.222006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:53.258810   79191 cri.go:89] found id: ""
	I0816 00:36:53.258841   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.258852   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:53.258859   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:53.258924   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:53.298672   79191 cri.go:89] found id: ""
	I0816 00:36:53.298701   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.298711   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:53.298719   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:53.298778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:53.333498   79191 cri.go:89] found id: ""
	I0816 00:36:53.333520   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.333527   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:53.333535   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:53.333548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.370495   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:53.370530   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:53.423938   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:53.423982   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:53.438897   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:53.438926   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:53.505951   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:53.505973   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:53.505987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.089638   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:56.103832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:56.103893   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:56.148010   79191 cri.go:89] found id: ""
	I0816 00:36:56.148038   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.148048   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:56.148057   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:56.148120   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:56.185631   79191 cri.go:89] found id: ""
	I0816 00:36:56.185663   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.185673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:56.185680   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:56.185739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:56.222064   79191 cri.go:89] found id: ""
	I0816 00:36:56.222093   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.222104   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:56.222112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:56.222162   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:56.260462   79191 cri.go:89] found id: ""
	I0816 00:36:56.260494   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.260504   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:56.260513   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:56.260574   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:56.296125   79191 cri.go:89] found id: ""
	I0816 00:36:56.296154   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.296164   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:56.296172   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:56.296236   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:56.333278   79191 cri.go:89] found id: ""
	I0816 00:36:56.333305   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.333316   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:56.333324   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:56.333385   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:56.368924   79191 cri.go:89] found id: ""
	I0816 00:36:56.368952   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.368962   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:56.368970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:56.369034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:56.407148   79191 cri.go:89] found id: ""
	I0816 00:36:56.407180   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.407190   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:56.407201   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:56.407215   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:56.464745   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:56.464779   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:56.478177   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:56.478204   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:56.555827   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:56.555851   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:56.555864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.640001   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:56.640040   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.495546   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.994786   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:58.426865   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:57.376484   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.876765   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.181423   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:59.195722   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:59.195804   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:59.232043   79191 cri.go:89] found id: ""
	I0816 00:36:59.232067   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.232075   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:59.232081   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:59.232132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:59.270628   79191 cri.go:89] found id: ""
	I0816 00:36:59.270656   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.270673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:59.270681   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:59.270743   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:59.304054   79191 cri.go:89] found id: ""
	I0816 00:36:59.304089   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.304100   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:59.304108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:59.304169   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:59.339386   79191 cri.go:89] found id: ""
	I0816 00:36:59.339410   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.339417   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:59.339423   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:59.339483   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:59.381313   79191 cri.go:89] found id: ""
	I0816 00:36:59.381361   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.381376   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:59.381385   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:59.381449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:59.417060   79191 cri.go:89] found id: ""
	I0816 00:36:59.417090   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.417101   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:59.417109   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:59.417160   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:59.461034   79191 cri.go:89] found id: ""
	I0816 00:36:59.461060   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.461071   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:59.461078   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:59.461136   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:59.496248   79191 cri.go:89] found id: ""
	I0816 00:36:59.496276   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.496286   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:59.496297   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:59.496312   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:59.566779   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:59.566803   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:59.566829   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:59.651999   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:59.652034   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:59.693286   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:59.693310   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:59.746677   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:59.746711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:58.494370   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.494959   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.927036   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:03.425008   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.376921   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.876676   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.262527   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:02.277903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:02.277965   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:02.323846   79191 cri.go:89] found id: ""
	I0816 00:37:02.323868   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.323876   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:02.323882   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:02.323938   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:02.359552   79191 cri.go:89] found id: ""
	I0816 00:37:02.359578   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.359589   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:02.359596   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:02.359657   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:02.395062   79191 cri.go:89] found id: ""
	I0816 00:37:02.395087   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.395094   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:02.395100   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:02.395155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:02.432612   79191 cri.go:89] found id: ""
	I0816 00:37:02.432636   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.432646   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:02.432654   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:02.432712   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:02.468612   79191 cri.go:89] found id: ""
	I0816 00:37:02.468640   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.468651   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:02.468659   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:02.468716   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:02.514472   79191 cri.go:89] found id: ""
	I0816 00:37:02.514500   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.514511   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:02.514519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:02.514576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:02.551964   79191 cri.go:89] found id: ""
	I0816 00:37:02.551993   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.552003   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:02.552011   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:02.552061   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:02.588018   79191 cri.go:89] found id: ""
	I0816 00:37:02.588044   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.588053   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:02.588063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:02.588081   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:02.638836   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:02.638875   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:02.653581   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:02.653613   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:02.737018   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.737047   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:02.737065   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:02.819726   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:02.819763   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.364943   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:05.379433   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:05.379492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:05.419165   79191 cri.go:89] found id: ""
	I0816 00:37:05.419191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.419198   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:05.419204   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:05.419264   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:05.454417   79191 cri.go:89] found id: ""
	I0816 00:37:05.454438   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.454446   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:05.454452   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:05.454497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:05.490162   79191 cri.go:89] found id: ""
	I0816 00:37:05.490191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.490203   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:05.490210   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:05.490268   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:05.527303   79191 cri.go:89] found id: ""
	I0816 00:37:05.527327   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.527334   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:05.527340   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:05.527393   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:05.562271   79191 cri.go:89] found id: ""
	I0816 00:37:05.562302   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.562310   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:05.562316   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:05.562374   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:05.597800   79191 cri.go:89] found id: ""
	I0816 00:37:05.597823   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.597830   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:05.597837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:05.597905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:05.633996   79191 cri.go:89] found id: ""
	I0816 00:37:05.634021   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.634028   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:05.634034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:05.634088   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:05.672408   79191 cri.go:89] found id: ""
	I0816 00:37:05.672437   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.672446   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:05.672457   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:05.672472   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:05.750956   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:05.750995   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.795573   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:05.795603   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:05.848560   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:05.848593   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:05.862245   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:05.862268   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:05.938704   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
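The block above is one pass of the retry loop that repeats throughout this log: process 79191 probes CRI-O for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and falls back to gathering kubelet, dmesg, CRI-O, and container-status logs; "describe nodes" fails because nothing is serving on localhost:8443 yet. A minimal sketch of the same probe, run by hand over SSH on the node, using only commands and paths quoted in the log above:

	# any kube-apiserver containers CRI-O knows about, running or exited
	sudo crictl ps -a --quiet --name=kube-apiserver
	# recent kubelet and CRI-O journal entries, as the harness collects them
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# the call that keeps failing while the apiserver is down
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig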
	I0816 00:37:02.495728   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.994839   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:05.425507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:07.426459   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:06.877664   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.375601   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:08.439692   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:08.452850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:08.452927   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:08.490015   79191 cri.go:89] found id: ""
	I0816 00:37:08.490043   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.490053   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:08.490060   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:08.490121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:08.529631   79191 cri.go:89] found id: ""
	I0816 00:37:08.529665   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.529676   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:08.529689   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:08.529747   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:08.564858   79191 cri.go:89] found id: ""
	I0816 00:37:08.564885   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.564896   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:08.564904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:08.564966   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:08.601144   79191 cri.go:89] found id: ""
	I0816 00:37:08.601180   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.601190   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:08.601200   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:08.601257   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:08.637050   79191 cri.go:89] found id: ""
	I0816 00:37:08.637081   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.637090   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:08.637098   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:08.637158   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:08.670613   79191 cri.go:89] found id: ""
	I0816 00:37:08.670644   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.670655   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:08.670663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:08.670727   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:08.704664   79191 cri.go:89] found id: ""
	I0816 00:37:08.704690   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.704698   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:08.704704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:08.704754   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:08.741307   79191 cri.go:89] found id: ""
	I0816 00:37:08.741337   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.741348   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:08.741360   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:08.741374   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:08.755434   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:08.755459   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:08.828118   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:08.828140   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:08.828151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:08.911565   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:08.911605   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:08.954907   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:08.954937   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.508848   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:11.521998   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:11.522060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:11.558581   79191 cri.go:89] found id: ""
	I0816 00:37:11.558611   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.558622   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:11.558630   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:11.558697   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:11.593798   79191 cri.go:89] found id: ""
	I0816 00:37:11.593822   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.593830   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:11.593836   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:11.593905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:11.629619   79191 cri.go:89] found id: ""
	I0816 00:37:11.629648   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.629658   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:11.629664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:11.629717   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:11.666521   79191 cri.go:89] found id: ""
	I0816 00:37:11.666548   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.666556   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:11.666562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:11.666607   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:11.703374   79191 cri.go:89] found id: ""
	I0816 00:37:11.703406   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.703417   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:11.703427   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:11.703491   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:11.739374   79191 cri.go:89] found id: ""
	I0816 00:37:11.739403   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.739413   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:11.739420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:11.739475   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:11.774981   79191 cri.go:89] found id: ""
	I0816 00:37:11.775006   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.775013   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:11.775019   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:11.775074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:06.995675   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.495024   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:12.428179   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.377241   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.875723   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.809561   79191 cri.go:89] found id: ""
	I0816 00:37:11.809590   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.809601   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:11.809612   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:11.809626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.863071   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:11.863116   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:11.878161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:11.878191   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:11.953572   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.953594   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:11.953608   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:12.035815   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:12.035848   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:14.576547   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:14.590747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:14.590802   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:14.626732   79191 cri.go:89] found id: ""
	I0816 00:37:14.626762   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.626774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:14.626781   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:14.626833   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:14.662954   79191 cri.go:89] found id: ""
	I0816 00:37:14.662978   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.662988   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:14.662996   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:14.663057   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:14.697618   79191 cri.go:89] found id: ""
	I0816 00:37:14.697646   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.697656   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:14.697663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:14.697725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:14.735137   79191 cri.go:89] found id: ""
	I0816 00:37:14.735161   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.735168   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:14.735174   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:14.735222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:14.770625   79191 cri.go:89] found id: ""
	I0816 00:37:14.770648   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.770655   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:14.770660   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:14.770718   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:14.808678   79191 cri.go:89] found id: ""
	I0816 00:37:14.808708   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.808718   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:14.808726   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:14.808795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:14.847321   79191 cri.go:89] found id: ""
	I0816 00:37:14.847349   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.847360   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:14.847368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:14.847425   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:14.886110   79191 cri.go:89] found id: ""
	I0816 00:37:14.886136   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.886147   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:14.886156   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:14.886175   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:14.971978   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:14.972013   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:15.015620   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:15.015644   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:15.067372   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:15.067405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:15.081629   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:15.081652   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:15.151580   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.995551   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.995831   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.495016   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:14.926297   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.926367   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:18.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:15.876514   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.877987   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
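Interleaved with that loop, three other runs (PIDs 78489, 78713, and 78747) keep polling metrics-server pods that never report Ready. The condition they watch can be read directly with kubectl; this is a sketch assuming a kubeconfig pointing at the corresponding cluster (the pod name is taken from the 78489 lines above; the jsonpath query is illustrative and not part of the harness):

	# prints "True" once the pod's Ready condition is met; these logs show it stuck at False
	kubectl -n kube-system get pod metrics-server-6867b74b74-mm5l7 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'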
	I0816 00:37:17.652362   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:17.666201   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:17.666278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:17.698723   79191 cri.go:89] found id: ""
	I0816 00:37:17.698760   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.698772   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:17.698778   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:17.698827   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:17.732854   79191 cri.go:89] found id: ""
	I0816 00:37:17.732883   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.732893   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:17.732901   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:17.732957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:17.767665   79191 cri.go:89] found id: ""
	I0816 00:37:17.767691   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.767701   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:17.767709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:17.767769   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:17.801490   79191 cri.go:89] found id: ""
	I0816 00:37:17.801512   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.801520   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:17.801526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:17.801579   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:17.837451   79191 cri.go:89] found id: ""
	I0816 00:37:17.837479   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.837490   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:17.837498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:17.837562   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:17.872898   79191 cri.go:89] found id: ""
	I0816 00:37:17.872924   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.872934   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:17.872943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:17.873002   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:17.910325   79191 cri.go:89] found id: ""
	I0816 00:37:17.910352   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.910362   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:17.910370   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:17.910431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:17.946885   79191 cri.go:89] found id: ""
	I0816 00:37:17.946909   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.946916   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:17.946923   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:17.946935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:18.014011   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:18.014045   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:18.028850   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:18.028886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:18.099362   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:18.099381   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:18.099396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:18.180552   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:18.180588   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:20.720810   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:20.733806   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:20.733887   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:20.771300   79191 cri.go:89] found id: ""
	I0816 00:37:20.771323   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.771330   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:20.771336   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:20.771394   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:20.812327   79191 cri.go:89] found id: ""
	I0816 00:37:20.812355   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.812362   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:20.812369   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:20.812430   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:20.846830   79191 cri.go:89] found id: ""
	I0816 00:37:20.846861   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.846872   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:20.846879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:20.846948   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:20.889979   79191 cri.go:89] found id: ""
	I0816 00:37:20.890005   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.890015   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:20.890023   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:20.890086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:20.933732   79191 cri.go:89] found id: ""
	I0816 00:37:20.933762   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.933772   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:20.933778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:20.933824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:20.972341   79191 cri.go:89] found id: ""
	I0816 00:37:20.972368   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.972376   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:20.972382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:20.972444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:21.011179   79191 cri.go:89] found id: ""
	I0816 00:37:21.011207   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.011216   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:21.011224   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:21.011282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:21.045645   79191 cri.go:89] found id: ""
	I0816 00:37:21.045668   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.045675   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:21.045684   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:21.045694   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:21.099289   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:21.099321   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:21.113814   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:21.113858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:21.186314   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:21.186337   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:21.186355   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:21.271116   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:21.271152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:18.994476   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.996435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:21.425187   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.425456   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.377999   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:22.877014   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.818598   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:23.832330   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:23.832387   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:23.869258   79191 cri.go:89] found id: ""
	I0816 00:37:23.869279   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.869286   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:23.869293   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:23.869342   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:23.903958   79191 cri.go:89] found id: ""
	I0816 00:37:23.903989   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.903999   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:23.904006   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:23.904060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:23.943110   79191 cri.go:89] found id: ""
	I0816 00:37:23.943142   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.943153   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:23.943160   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:23.943222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:23.979325   79191 cri.go:89] found id: ""
	I0816 00:37:23.979356   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.979366   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:23.979374   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:23.979435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:24.017570   79191 cri.go:89] found id: ""
	I0816 00:37:24.017597   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.017607   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:24.017614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:24.017684   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:24.051522   79191 cri.go:89] found id: ""
	I0816 00:37:24.051546   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.051555   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:24.051562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:24.051626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:24.087536   79191 cri.go:89] found id: ""
	I0816 00:37:24.087561   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.087572   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:24.087579   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:24.087644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:24.123203   79191 cri.go:89] found id: ""
	I0816 00:37:24.123233   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.123245   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:24.123256   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:24.123276   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:24.178185   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:24.178225   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:24.192895   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:24.192920   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:24.273471   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:24.273492   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:24.273504   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:24.357890   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:24.357936   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:23.495269   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.994859   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.427328   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.927068   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.376932   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.377168   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:29.876182   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:26.950399   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:26.964347   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:26.964406   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:27.004694   79191 cri.go:89] found id: ""
	I0816 00:37:27.004722   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.004738   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:27.004745   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:27.004800   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:27.040051   79191 cri.go:89] found id: ""
	I0816 00:37:27.040080   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.040090   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:27.040096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:27.040144   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:27.088614   79191 cri.go:89] found id: ""
	I0816 00:37:27.088642   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.088651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:27.088657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:27.088732   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:27.125427   79191 cri.go:89] found id: ""
	I0816 00:37:27.125450   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.125457   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:27.125464   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:27.125511   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:27.158562   79191 cri.go:89] found id: ""
	I0816 00:37:27.158592   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.158602   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:27.158609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:27.158672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:27.192986   79191 cri.go:89] found id: ""
	I0816 00:37:27.193015   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.193026   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:27.193034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:27.193091   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:27.228786   79191 cri.go:89] found id: ""
	I0816 00:37:27.228828   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.228847   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:27.228858   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:27.228921   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:27.262776   79191 cri.go:89] found id: ""
	I0816 00:37:27.262808   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.262819   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:27.262829   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:27.262844   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:27.276444   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:27.276470   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:27.349918   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:27.349946   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:27.349958   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:27.435030   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:27.435061   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:27.484043   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:27.484069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.038376   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:30.051467   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:30.051530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:30.086346   79191 cri.go:89] found id: ""
	I0816 00:37:30.086376   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.086386   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:30.086394   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:30.086454   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:30.127665   79191 cri.go:89] found id: ""
	I0816 00:37:30.127691   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.127699   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:30.127704   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:30.127757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:30.169901   79191 cri.go:89] found id: ""
	I0816 00:37:30.169929   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.169939   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:30.169950   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:30.170013   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:30.212501   79191 cri.go:89] found id: ""
	I0816 00:37:30.212523   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.212530   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:30.212537   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:30.212584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:30.256560   79191 cri.go:89] found id: ""
	I0816 00:37:30.256583   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.256591   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:30.256597   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:30.256646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:30.291062   79191 cri.go:89] found id: ""
	I0816 00:37:30.291086   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.291093   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:30.291099   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:30.291143   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:30.328325   79191 cri.go:89] found id: ""
	I0816 00:37:30.328353   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.328361   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:30.328368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:30.328415   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:30.364946   79191 cri.go:89] found id: ""
	I0816 00:37:30.364972   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.364981   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:30.364991   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:30.365005   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:30.408090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:30.408117   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.463421   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:30.463456   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:30.479679   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:30.479711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:30.555394   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:30.555416   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:30.555432   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:28.494477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.494598   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.427146   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:32.926282   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:31.877446   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.376145   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:33.137366   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:33.150970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:33.151030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:33.191020   79191 cri.go:89] found id: ""
	I0816 00:37:33.191047   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.191055   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:33.191061   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:33.191112   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:33.227971   79191 cri.go:89] found id: ""
	I0816 00:37:33.228022   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.228030   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:33.228038   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:33.228089   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:33.265036   79191 cri.go:89] found id: ""
	I0816 00:37:33.265065   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.265074   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:33.265079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:33.265126   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:33.300385   79191 cri.go:89] found id: ""
	I0816 00:37:33.300411   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.300418   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:33.300425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:33.300487   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:33.335727   79191 cri.go:89] found id: ""
	I0816 00:37:33.335757   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.335768   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:33.335776   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:33.335839   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:33.373458   79191 cri.go:89] found id: ""
	I0816 00:37:33.373489   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.373500   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:33.373507   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:33.373568   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:33.410380   79191 cri.go:89] found id: ""
	I0816 00:37:33.410404   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.410413   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:33.410420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:33.410480   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:33.451007   79191 cri.go:89] found id: ""
	I0816 00:37:33.451030   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.451040   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:33.451049   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:33.451062   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:33.502215   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:33.502249   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:33.516123   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:33.516152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:33.590898   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:33.590921   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:33.590944   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:33.668404   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:33.668455   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:36.209671   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:36.223498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:36.223561   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:36.258980   79191 cri.go:89] found id: ""
	I0816 00:37:36.259041   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.259056   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:36.259064   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:36.259123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:36.293659   79191 cri.go:89] found id: ""
	I0816 00:37:36.293687   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.293694   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:36.293703   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:36.293761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:36.331729   79191 cri.go:89] found id: ""
	I0816 00:37:36.331756   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.331766   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:36.331773   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:36.331830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:36.368441   79191 cri.go:89] found id: ""
	I0816 00:37:36.368470   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.368479   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:36.368486   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:36.368533   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:36.405338   79191 cri.go:89] found id: ""
	I0816 00:37:36.405368   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.405380   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:36.405389   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:36.405448   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:36.441986   79191 cri.go:89] found id: ""
	I0816 00:37:36.442018   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.442029   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:36.442038   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:36.442097   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:36.478102   79191 cri.go:89] found id: ""
	I0816 00:37:36.478183   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.478197   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:36.478206   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:36.478269   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:36.517138   79191 cri.go:89] found id: ""
	I0816 00:37:36.517167   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.517178   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:36.517190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:36.517205   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:36.570009   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:36.570042   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:36.583534   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:36.583565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:36.651765   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:36.651794   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:36.651808   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:36.732836   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:36.732870   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:32.495090   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.996253   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.926615   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:37.425790   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:36.377305   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:38.876443   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.274490   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:39.288528   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:39.288591   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:39.325560   79191 cri.go:89] found id: ""
	I0816 00:37:39.325582   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.325589   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:39.325599   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:39.325656   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:39.365795   79191 cri.go:89] found id: ""
	I0816 00:37:39.365822   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.365829   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:39.365837   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:39.365906   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:39.404933   79191 cri.go:89] found id: ""
	I0816 00:37:39.404961   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.404971   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:39.404977   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:39.405041   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:39.442712   79191 cri.go:89] found id: ""
	I0816 00:37:39.442736   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.442747   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:39.442754   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:39.442814   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:39.484533   79191 cri.go:89] found id: ""
	I0816 00:37:39.484557   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.484566   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:39.484573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:39.484636   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:39.522089   79191 cri.go:89] found id: ""
	I0816 00:37:39.522115   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.522125   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:39.522133   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:39.522194   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:39.557099   79191 cri.go:89] found id: ""
	I0816 00:37:39.557128   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.557138   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:39.557145   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:39.557205   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:39.594809   79191 cri.go:89] found id: ""
	I0816 00:37:39.594838   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.594849   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:39.594859   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:39.594874   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:39.611079   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:39.611110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:39.683156   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:39.683182   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:39.683198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:39.761198   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:39.761235   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:39.800972   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:39.801003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:37.494553   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.495854   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.427910   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.926445   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.376128   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:43.377791   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:42.354816   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:42.368610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:42.368673   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:42.404716   79191 cri.go:89] found id: ""
	I0816 00:37:42.404738   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.404745   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:42.404753   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:42.404798   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:42.441619   79191 cri.go:89] found id: ""
	I0816 00:37:42.441649   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.441660   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:42.441667   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:42.441726   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:42.480928   79191 cri.go:89] found id: ""
	I0816 00:37:42.480965   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.480976   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:42.480983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:42.481051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:42.519187   79191 cri.go:89] found id: ""
	I0816 00:37:42.519216   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.519226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:42.519234   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:42.519292   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:42.554928   79191 cri.go:89] found id: ""
	I0816 00:37:42.554956   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.554967   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:42.554974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:42.555035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:42.593436   79191 cri.go:89] found id: ""
	I0816 00:37:42.593472   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.593481   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:42.593487   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:42.593545   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:42.628078   79191 cri.go:89] found id: ""
	I0816 00:37:42.628101   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.628108   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:42.628113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:42.628172   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:42.662824   79191 cri.go:89] found id: ""
	I0816 00:37:42.662852   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.662862   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:42.662871   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:42.662888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:42.677267   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:42.677290   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:42.749570   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:42.749599   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:42.749615   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:42.831177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:42.831213   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:42.871928   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:42.871957   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.430704   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:45.444400   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:45.444461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:45.479503   79191 cri.go:89] found id: ""
	I0816 00:37:45.479529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.479537   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:45.479543   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:45.479596   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:45.518877   79191 cri.go:89] found id: ""
	I0816 00:37:45.518907   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.518917   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:45.518925   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:45.518992   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:45.553936   79191 cri.go:89] found id: ""
	I0816 00:37:45.553966   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.553977   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:45.553984   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:45.554035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:45.593054   79191 cri.go:89] found id: ""
	I0816 00:37:45.593081   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.593088   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:45.593095   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:45.593147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:45.631503   79191 cri.go:89] found id: ""
	I0816 00:37:45.631529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.631537   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:45.631543   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:45.631599   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:45.667435   79191 cri.go:89] found id: ""
	I0816 00:37:45.667459   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.667466   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:45.667473   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:45.667529   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:45.702140   79191 cri.go:89] found id: ""
	I0816 00:37:45.702168   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.702179   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:45.702187   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:45.702250   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:45.736015   79191 cri.go:89] found id: ""
	I0816 00:37:45.736048   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.736059   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:45.736070   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:45.736085   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:45.817392   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:45.817427   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:45.856421   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:45.856451   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.912429   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:45.912476   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:45.928411   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:45.928435   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:46.001141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:41.995835   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.497033   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.426414   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:46.927720   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:45.876721   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:47.877185   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.877396   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.501317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:48.515114   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:48.515190   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:48.553776   79191 cri.go:89] found id: ""
	I0816 00:37:48.553802   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.553810   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:48.553816   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:48.553890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:48.589760   79191 cri.go:89] found id: ""
	I0816 00:37:48.589786   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.589794   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:48.589800   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:48.589871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:48.629792   79191 cri.go:89] found id: ""
	I0816 00:37:48.629816   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.629825   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:48.629833   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:48.629898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:48.668824   79191 cri.go:89] found id: ""
	I0816 00:37:48.668852   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.668860   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:48.668866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:48.668930   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:48.704584   79191 cri.go:89] found id: ""
	I0816 00:37:48.704615   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.704626   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:48.704634   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:48.704691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:48.738833   79191 cri.go:89] found id: ""
	I0816 00:37:48.738855   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.738863   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:48.738868   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:48.738928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:48.774943   79191 cri.go:89] found id: ""
	I0816 00:37:48.774972   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.774981   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:48.774989   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:48.775051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:48.808802   79191 cri.go:89] found id: ""
	I0816 00:37:48.808825   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.808832   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:48.808841   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:48.808856   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:48.858849   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:48.858880   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:48.873338   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:48.873369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:48.950172   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:48.950195   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:48.950209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:49.038642   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:49.038679   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:51.581947   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:51.596612   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:51.596691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:51.631468   79191 cri.go:89] found id: ""
	I0816 00:37:51.631498   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.631509   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:51.631517   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:51.631577   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:51.666922   79191 cri.go:89] found id: ""
	I0816 00:37:51.666953   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.666963   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:51.666971   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:51.667034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:51.707081   79191 cri.go:89] found id: ""
	I0816 00:37:51.707109   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.707116   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:51.707122   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:51.707189   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:51.743884   79191 cri.go:89] found id: ""
	I0816 00:37:51.743912   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.743925   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:51.743932   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:51.743990   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:51.779565   79191 cri.go:89] found id: ""
	I0816 00:37:51.779595   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.779603   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:51.779610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:51.779658   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:46.994211   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.995446   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.495519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.426703   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.426947   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:53.427050   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:52.377050   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.877759   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.818800   79191 cri.go:89] found id: ""
	I0816 00:37:51.818824   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.818831   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:51.818837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:51.818899   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:51.855343   79191 cri.go:89] found id: ""
	I0816 00:37:51.855367   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.855374   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:51.855380   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:51.855426   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:51.890463   79191 cri.go:89] found id: ""
	I0816 00:37:51.890496   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.890505   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:51.890513   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:51.890526   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:51.977168   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:51.977209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:52.021626   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:52.021660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:52.076983   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:52.077027   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:52.092111   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:52.092142   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:52.172738   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:54.673192   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:54.688780   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.688853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.725279   79191 cri.go:89] found id: ""
	I0816 00:37:54.725308   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.725318   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:54.725325   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.725383   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:54.764326   79191 cri.go:89] found id: ""
	I0816 00:37:54.764353   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.764364   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:54.764372   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:54.764423   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:54.805221   79191 cri.go:89] found id: ""
	I0816 00:37:54.805252   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.805263   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:54.805270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:54.805334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:54.849724   79191 cri.go:89] found id: ""
	I0816 00:37:54.849750   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.849759   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:54.849765   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:54.849824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:54.894438   79191 cri.go:89] found id: ""
	I0816 00:37:54.894460   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.894468   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:54.894475   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:54.894532   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:54.933400   79191 cri.go:89] found id: ""
	I0816 00:37:54.933422   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.933431   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:54.933439   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:54.933497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:54.982249   79191 cri.go:89] found id: ""
	I0816 00:37:54.982277   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.982286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:54.982294   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:54.982353   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:55.024431   79191 cri.go:89] found id: ""
	I0816 00:37:55.024458   79191 logs.go:276] 0 containers: []
	W0816 00:37:55.024469   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:55.024479   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.024499   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:55.107089   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:55.107119   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:55.148949   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.148981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.202865   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.202902   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.218528   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.218556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:55.304995   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:53.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:55.995483   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.926671   78713 pod_ready.go:82] duration metric: took 4m0.007058537s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:37:54.926700   78713 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:37:54.926711   78713 pod_ready.go:39] duration metric: took 4m7.919515966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:37:54.926728   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:37:54.926764   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.926821   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.983024   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:54.983043   78713 cri.go:89] found id: ""
	I0816 00:37:54.983052   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:54.983103   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:54.988579   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.988644   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:55.035200   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.035231   78713 cri.go:89] found id: ""
	I0816 00:37:55.035241   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:55.035291   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.040701   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:55.040777   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:55.087306   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.087330   78713 cri.go:89] found id: ""
	I0816 00:37:55.087340   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:55.087422   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.092492   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:55.092560   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:55.144398   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.144424   78713 cri.go:89] found id: ""
	I0816 00:37:55.144433   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:55.144494   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.149882   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:55.149953   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:55.193442   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.193464   78713 cri.go:89] found id: ""
	I0816 00:37:55.193472   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:55.193528   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.198812   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:55.198886   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:55.238634   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.238656   78713 cri.go:89] found id: ""
	I0816 00:37:55.238666   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:55.238729   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.243141   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:55.243229   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:55.281414   78713 cri.go:89] found id: ""
	I0816 00:37:55.281439   78713 logs.go:276] 0 containers: []
	W0816 00:37:55.281449   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:55.281457   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:55.281519   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:55.319336   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.319357   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.319363   78713 cri.go:89] found id: ""
	I0816 00:37:55.319371   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:55.319431   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.323837   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.328777   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:55.328801   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.376259   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:55.376290   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.419553   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:55.419584   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.476026   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.476058   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.544263   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.544297   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.561818   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.561858   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:55.701342   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:55.701375   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:55.746935   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:55.746968   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.787200   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:55.787234   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.825257   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:55.825282   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.865569   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:55.865594   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.905234   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.905269   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:56.391175   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:56.391208   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:58.943163   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:58.961551   78713 api_server.go:72] duration metric: took 4m17.689832084s to wait for apiserver process to appear ...
	I0816 00:37:58.961592   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:37:58.961630   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:58.961697   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:59.001773   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.001794   78713 cri.go:89] found id: ""
	I0816 00:37:59.001803   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:59.001876   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.006168   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:59.006222   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:59.041625   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.041647   78713 cri.go:89] found id: ""
	I0816 00:37:59.041654   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:59.041715   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.046258   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:59.046323   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:59.086070   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.086089   78713 cri.go:89] found id: ""
	I0816 00:37:59.086097   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:59.086151   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.090556   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:59.090626   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:59.129889   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.129931   78713 cri.go:89] found id: ""
	I0816 00:37:59.129942   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:59.130008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.135694   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:59.135775   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.375656   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.375979   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:57.805335   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:57.819904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:57.819989   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:57.856119   79191 cri.go:89] found id: ""
	I0816 00:37:57.856146   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.856153   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:57.856160   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:57.856217   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:57.892797   79191 cri.go:89] found id: ""
	I0816 00:37:57.892825   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.892833   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:57.892841   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:57.892905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:57.928753   79191 cri.go:89] found id: ""
	I0816 00:37:57.928784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.928795   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:57.928803   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:57.928884   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:57.963432   79191 cri.go:89] found id: ""
	I0816 00:37:57.963462   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.963474   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:57.963481   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:57.963538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.998759   79191 cri.go:89] found id: ""
	I0816 00:37:57.998784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.998793   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:57.998801   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:57.998886   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:58.035262   79191 cri.go:89] found id: ""
	I0816 00:37:58.035288   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.035296   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:58.035303   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:58.035358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:58.071052   79191 cri.go:89] found id: ""
	I0816 00:37:58.071079   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.071087   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:58.071092   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:58.071150   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:58.110047   79191 cri.go:89] found id: ""
	I0816 00:37:58.110074   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.110083   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:58.110090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:58.110101   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:58.164792   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:58.164823   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:58.178742   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:58.178770   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:58.251861   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:58.251899   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:58.251921   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:58.329805   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:58.329859   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:00.872911   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:00.887914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:00.887986   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:00.925562   79191 cri.go:89] found id: ""
	I0816 00:38:00.925595   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.925606   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:00.925615   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:00.925669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:00.961476   79191 cri.go:89] found id: ""
	I0816 00:38:00.961498   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.961505   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:00.961510   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:00.961554   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:00.997575   79191 cri.go:89] found id: ""
	I0816 00:38:00.997599   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.997608   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:00.997616   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:00.997677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:01.035130   79191 cri.go:89] found id: ""
	I0816 00:38:01.035158   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.035169   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:01.035177   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:01.035232   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:01.073768   79191 cri.go:89] found id: ""
	I0816 00:38:01.073800   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.073811   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:01.073819   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:01.073898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:01.107904   79191 cri.go:89] found id: ""
	I0816 00:38:01.107928   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.107937   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:01.107943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:01.108004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:01.142654   79191 cri.go:89] found id: ""
	I0816 00:38:01.142690   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.142701   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:01.142709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:01.142766   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:01.187565   79191 cri.go:89] found id: ""
	I0816 00:38:01.187599   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.187610   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:01.187621   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:01.187635   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:01.265462   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:01.265493   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:01.265508   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:01.346988   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:01.347020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:01.390977   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:01.391006   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.443858   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:01.443892   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:57.996188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:00.495210   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.176702   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.176728   78713 cri.go:89] found id: ""
	I0816 00:37:59.176738   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:59.176799   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.182305   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:59.182387   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:59.223938   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.223960   78713 cri.go:89] found id: ""
	I0816 00:37:59.223968   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:59.224023   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.228818   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:59.228884   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:59.264566   78713 cri.go:89] found id: ""
	I0816 00:37:59.264589   78713 logs.go:276] 0 containers: []
	W0816 00:37:59.264597   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:59.264606   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:59.264654   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:59.302534   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.302560   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.302565   78713 cri.go:89] found id: ""
	I0816 00:37:59.302574   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:59.302621   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.307021   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.311258   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:59.311299   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:59.425542   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:59.425574   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.466078   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:59.466107   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:59.480894   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:59.480925   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.524790   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:59.524822   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.568832   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:59.568862   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.619399   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:59.619433   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.658616   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:59.658645   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.720421   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:59.720469   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.756558   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:59.756586   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.798650   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:59.798674   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:59.864280   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:59.864323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:59.913086   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:59.913118   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:02.828194   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:38:02.832896   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:38:02.834035   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:02.834059   78713 api_server.go:131] duration metric: took 3.87246001s to wait for apiserver health ...
	I0816 00:38:02.834067   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:02.834089   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:02.834145   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:02.873489   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:02.873512   78713 cri.go:89] found id: ""
	I0816 00:38:02.873521   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:38:02.873577   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.878807   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:02.878883   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:02.919930   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:02.919949   78713 cri.go:89] found id: ""
	I0816 00:38:02.919957   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:38:02.920008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.924459   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:02.924525   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:02.964609   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:02.964636   78713 cri.go:89] found id: ""
	I0816 00:38:02.964644   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:38:02.964697   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.968808   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:02.968921   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:03.017177   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.017201   78713 cri.go:89] found id: ""
	I0816 00:38:03.017210   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:38:03.017275   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.021905   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:03.021992   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:03.061720   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.061741   78713 cri.go:89] found id: ""
	I0816 00:38:03.061748   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:38:03.061801   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.066149   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:03.066206   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:03.107130   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.107149   78713 cri.go:89] found id: ""
	I0816 00:38:03.107156   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:38:03.107213   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.111323   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:03.111372   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:03.149906   78713 cri.go:89] found id: ""
	I0816 00:38:03.149927   78713 logs.go:276] 0 containers: []
	W0816 00:38:03.149934   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:03.149940   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:03.150000   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:03.190981   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.191007   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.191011   78713 cri.go:89] found id: ""
	I0816 00:38:03.191018   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:38:03.191066   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.195733   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.199755   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:03.199775   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:03.302209   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:38:03.302239   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:03.352505   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:38:03.352548   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.392296   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:38:03.392323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.448092   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:38:03.448130   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.487516   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:38:03.487541   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:03.541954   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:03.541989   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:03.557026   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:38:03.557049   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:03.602639   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:38:03.602670   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:03.642706   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:38:03.642733   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.683504   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:38:03.683530   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.721802   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:03.721826   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.089579   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.089621   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.376613   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:03.376837   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:06.679744   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:06.679797   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.679805   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.679812   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.679819   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.679825   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.679849   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.679861   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.679869   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.679878   78713 system_pods.go:74] duration metric: took 3.845804999s to wait for pod list to return data ...
	I0816 00:38:06.679886   78713 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:06.682521   78713 default_sa.go:45] found service account: "default"
	I0816 00:38:06.682553   78713 default_sa.go:55] duration metric: took 2.660224ms for default service account to be created ...
	I0816 00:38:06.682565   78713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:06.688149   78713 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:06.688178   78713 system_pods.go:89] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.688183   78713 system_pods.go:89] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.688187   78713 system_pods.go:89] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.688192   78713 system_pods.go:89] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.688196   78713 system_pods.go:89] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.688199   78713 system_pods.go:89] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.688206   78713 system_pods.go:89] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.688213   78713 system_pods.go:89] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.688220   78713 system_pods.go:126] duration metric: took 5.649758ms to wait for k8s-apps to be running ...
	I0816 00:38:06.688226   78713 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:06.688268   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:06.706263   78713 system_svc.go:56] duration metric: took 18.025675ms WaitForService to wait for kubelet
	I0816 00:38:06.706301   78713 kubeadm.go:582] duration metric: took 4m25.434584326s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:06.706337   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:06.709536   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:06.709553   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:06.709565   78713 node_conditions.go:105] duration metric: took 3.213145ms to run NodePressure ...
	I0816 00:38:06.709576   78713 start.go:241] waiting for startup goroutines ...
	I0816 00:38:06.709582   78713 start.go:246] waiting for cluster config update ...
	I0816 00:38:06.709593   78713 start.go:255] writing updated cluster config ...
	I0816 00:38:06.709864   78713 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:06.755974   78713 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:06.757917   78713 out.go:177] * Done! kubectl is now configured to use "embed-certs-758469" cluster and "default" namespace by default
	I0816 00:38:03.959040   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:03.973674   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:03.973758   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:04.013606   79191 cri.go:89] found id: ""
	I0816 00:38:04.013653   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.013661   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:04.013667   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:04.013737   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:04.054558   79191 cri.go:89] found id: ""
	I0816 00:38:04.054590   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.054602   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:04.054609   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:04.054667   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:04.097116   79191 cri.go:89] found id: ""
	I0816 00:38:04.097143   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.097154   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:04.097162   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:04.097223   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:04.136770   79191 cri.go:89] found id: ""
	I0816 00:38:04.136798   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.136809   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:04.136816   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:04.136865   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:04.171906   79191 cri.go:89] found id: ""
	I0816 00:38:04.171929   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.171937   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:04.171943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:04.172004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:04.208694   79191 cri.go:89] found id: ""
	I0816 00:38:04.208725   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.208735   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:04.208744   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:04.208803   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:04.276713   79191 cri.go:89] found id: ""
	I0816 00:38:04.276744   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.276755   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:04.276763   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:04.276823   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:04.316646   79191 cri.go:89] found id: ""
	I0816 00:38:04.316669   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.316696   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:04.316707   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:04.316722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:04.329819   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:04.329864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:04.399032   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:04.399052   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:04.399080   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.487665   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:04.487698   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:04.530937   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.530962   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:02.496317   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:04.496477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:05.878535   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:08.377096   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:07.087584   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:07.102015   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:07.102086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:07.139530   79191 cri.go:89] found id: ""
	I0816 00:38:07.139559   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.139569   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:07.139577   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:07.139642   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:07.179630   79191 cri.go:89] found id: ""
	I0816 00:38:07.179659   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.179669   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:07.179675   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:07.179734   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:07.216407   79191 cri.go:89] found id: ""
	I0816 00:38:07.216435   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.216444   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:07.216449   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:07.216509   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:07.252511   79191 cri.go:89] found id: ""
	I0816 00:38:07.252536   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.252544   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:07.252551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:07.252613   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:07.288651   79191 cri.go:89] found id: ""
	I0816 00:38:07.288679   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.288689   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:07.288698   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:07.288757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:07.325910   79191 cri.go:89] found id: ""
	I0816 00:38:07.325963   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.325974   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:07.325982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:07.326046   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:07.362202   79191 cri.go:89] found id: ""
	I0816 00:38:07.362230   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.362244   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:07.362251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:07.362316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:07.405272   79191 cri.go:89] found id: ""
	I0816 00:38:07.405302   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.405313   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:07.405324   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:07.405339   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:07.461186   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:07.461222   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:07.475503   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:07.475544   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:07.555146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:07.555165   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:07.555179   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:07.635162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:07.635201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.174600   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:10.190418   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:10.190479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:10.251925   79191 cri.go:89] found id: ""
	I0816 00:38:10.251960   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.251969   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:10.251974   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:10.252027   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:10.289038   79191 cri.go:89] found id: ""
	I0816 00:38:10.289078   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.289088   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:10.289096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:10.289153   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:10.334562   79191 cri.go:89] found id: ""
	I0816 00:38:10.334591   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.334601   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:10.334609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:10.334669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:10.371971   79191 cri.go:89] found id: ""
	I0816 00:38:10.372000   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.372010   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:10.372018   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:10.372084   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:10.409654   79191 cri.go:89] found id: ""
	I0816 00:38:10.409685   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.409696   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:10.409703   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:10.409770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:10.446639   79191 cri.go:89] found id: ""
	I0816 00:38:10.446666   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.446675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:10.446683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:10.446750   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:10.483601   79191 cri.go:89] found id: ""
	I0816 00:38:10.483629   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.483641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:10.483648   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:10.483707   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:10.519640   79191 cri.go:89] found id: ""
	I0816 00:38:10.519670   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.519679   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:10.519690   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:10.519704   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:10.603281   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:10.603300   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:10.603311   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:10.689162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:10.689198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.730701   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:10.730724   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:10.780411   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:10.780441   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:06.997726   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:09.495539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.495753   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:10.876242   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.376332   78747 pod_ready.go:82] duration metric: took 4m0.006460655s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:38:11.376362   78747 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:38:11.376372   78747 pod_ready.go:39] duration metric: took 4m3.906659924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:38:11.376389   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:38:11.376416   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:11.376472   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:11.425716   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:11.425741   78747 cri.go:89] found id: ""
	I0816 00:38:11.425749   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:11.425804   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.431122   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:11.431195   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:11.468622   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:11.468647   78747 cri.go:89] found id: ""
	I0816 00:38:11.468657   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:11.468713   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.474270   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:11.474329   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:11.518448   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:11.518493   78747 cri.go:89] found id: ""
	I0816 00:38:11.518502   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:11.518569   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.524185   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:11.524242   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:11.561343   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:11.561367   78747 cri.go:89] found id: ""
	I0816 00:38:11.561374   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:11.561418   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.565918   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:11.565992   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:11.606010   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.606036   78747 cri.go:89] found id: ""
	I0816 00:38:11.606043   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:11.606097   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.610096   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:11.610166   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:11.646204   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:11.646229   78747 cri.go:89] found id: ""
	I0816 00:38:11.646238   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:11.646295   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.650405   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:11.650467   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:11.690407   78747 cri.go:89] found id: ""
	I0816 00:38:11.690436   78747 logs.go:276] 0 containers: []
	W0816 00:38:11.690446   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:11.690454   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:11.690510   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:11.736695   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:11.736722   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:11.736729   78747 cri.go:89] found id: ""
	I0816 00:38:11.736738   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:11.736803   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.741022   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.744983   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:11.745011   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.791452   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:11.791484   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:12.304425   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:12.304470   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:12.341318   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:12.341353   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:12.401425   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:12.401464   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:12.476598   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:12.476653   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:12.495594   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:12.495629   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:12.645961   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:12.645991   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:12.697058   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:12.697091   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:12.749085   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:12.749117   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:12.795786   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:12.795831   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:12.835928   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:12.835959   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:12.872495   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:12.872524   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:13.294689   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:13.308762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:13.308822   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:13.345973   79191 cri.go:89] found id: ""
	I0816 00:38:13.346004   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.346015   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:13.346022   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:13.346083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:13.382905   79191 cri.go:89] found id: ""
	I0816 00:38:13.382934   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.382945   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:13.382952   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:13.383001   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:13.417616   79191 cri.go:89] found id: ""
	I0816 00:38:13.417650   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.417662   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:13.417669   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:13.417739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:13.453314   79191 cri.go:89] found id: ""
	I0816 00:38:13.453350   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.453360   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:13.453368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:13.453435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:13.488507   79191 cri.go:89] found id: ""
	I0816 00:38:13.488536   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.488547   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:13.488555   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:13.488614   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:13.527064   79191 cri.go:89] found id: ""
	I0816 00:38:13.527095   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.527108   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:13.527116   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:13.527178   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:13.562838   79191 cri.go:89] found id: ""
	I0816 00:38:13.562867   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.562876   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:13.562882   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:13.562944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:13.598924   79191 cri.go:89] found id: ""
	I0816 00:38:13.598963   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.598974   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:13.598985   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:13.598999   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:13.651122   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:13.651156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:13.665255   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:13.665281   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:13.742117   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:13.742135   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:13.742148   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:13.824685   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:13.824719   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.366542   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:16.380855   79191 kubeadm.go:597] duration metric: took 4m3.665876253s to restartPrimaryControlPlane
	W0816 00:38:16.380919   79191 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:38:16.380946   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:38:13.496702   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.996304   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.421355   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:15.437651   78747 api_server.go:72] duration metric: took 4m15.224557183s to wait for apiserver process to appear ...
	I0816 00:38:15.437677   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:38:15.437721   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:15.437782   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:15.473240   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:15.473265   78747 cri.go:89] found id: ""
	I0816 00:38:15.473273   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:15.473335   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.477666   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:15.477734   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:15.526073   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:15.526095   78747 cri.go:89] found id: ""
	I0816 00:38:15.526104   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:15.526165   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.530706   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:15.530775   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:15.571124   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:15.571149   78747 cri.go:89] found id: ""
	I0816 00:38:15.571159   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:15.571217   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.578613   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:15.578690   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:15.617432   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:15.617454   78747 cri.go:89] found id: ""
	I0816 00:38:15.617464   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:15.617529   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.621818   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:15.621899   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:15.658963   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:15.658981   78747 cri.go:89] found id: ""
	I0816 00:38:15.658988   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:15.659037   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.663170   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:15.663230   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:15.699297   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.699322   78747 cri.go:89] found id: ""
	I0816 00:38:15.699331   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:15.699388   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.704029   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:15.704085   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:15.742790   78747 cri.go:89] found id: ""
	I0816 00:38:15.742816   78747 logs.go:276] 0 containers: []
	W0816 00:38:15.742825   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:15.742830   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:15.742875   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:15.776898   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:15.776918   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:15.776922   78747 cri.go:89] found id: ""
	I0816 00:38:15.776945   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:15.777007   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.781511   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.785953   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:15.785981   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.840461   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:15.840498   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:16.320285   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:16.320323   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.362171   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:16.362200   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:16.444803   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:16.444834   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:16.461705   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:16.461732   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:16.576190   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:16.576220   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:16.626407   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:16.626449   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:16.673004   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:16.673036   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:16.724770   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:16.724797   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:16.764812   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:16.764838   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:16.804268   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:16.804300   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:16.841197   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:16.841221   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.380352   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:38:19.386760   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:38:19.387751   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:19.387773   78747 api_server.go:131] duration metric: took 3.950088801s to wait for apiserver health ...
	I0816 00:38:19.387781   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:19.387801   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:19.387843   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:19.429928   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:19.429952   78747 cri.go:89] found id: ""
	I0816 00:38:19.429961   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:19.430021   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.434822   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:19.434870   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:19.476789   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:19.476811   78747 cri.go:89] found id: ""
	I0816 00:38:19.476819   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:19.476869   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.481574   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:19.481640   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:19.528718   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:19.528742   78747 cri.go:89] found id: ""
	I0816 00:38:19.528750   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:19.528799   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.533391   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:19.533455   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:19.581356   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:19.581374   78747 cri.go:89] found id: ""
	I0816 00:38:19.581381   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:19.581427   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.585915   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:19.585977   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:19.623514   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:19.623544   78747 cri.go:89] found id: ""
	I0816 00:38:19.623552   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:19.623606   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.627652   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:19.627711   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:19.663933   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:19.663957   78747 cri.go:89] found id: ""
	I0816 00:38:19.663967   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:19.664032   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.668093   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:19.668162   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:19.707688   78747 cri.go:89] found id: ""
	I0816 00:38:19.707716   78747 logs.go:276] 0 containers: []
	W0816 00:38:19.707726   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:19.707741   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:19.707804   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:19.745900   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:19.745930   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.745935   78747 cri.go:89] found id: ""
	I0816 00:38:19.745944   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:19.746002   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.750934   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.755022   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:19.755044   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:19.807228   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:19.807257   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:19.918242   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:19.918274   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:21.772367   79191 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.39139467s)
	I0816 00:38:21.772449   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:18.495150   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:20.995073   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:19.969165   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:19.969198   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:20.008945   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:20.008975   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:20.050080   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:20.050120   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:20.450059   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:20.450107   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:20.490694   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:20.490721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:20.532856   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:20.532890   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:20.609130   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:20.609178   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:20.624248   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:20.624279   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:20.675636   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:20.675669   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:20.716694   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:20.716721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:23.289748   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:23.289773   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.289778   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.289782   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.289786   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.289789   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.289792   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.289799   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.289814   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.289827   78747 system_pods.go:74] duration metric: took 3.902040304s to wait for pod list to return data ...
	I0816 00:38:23.289836   78747 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:23.293498   78747 default_sa.go:45] found service account: "default"
	I0816 00:38:23.293528   78747 default_sa.go:55] duration metric: took 3.671585ms for default service account to be created ...
	I0816 00:38:23.293539   78747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:23.298509   78747 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:23.298534   78747 system_pods.go:89] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.298540   78747 system_pods.go:89] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.298545   78747 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.298549   78747 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.298552   78747 system_pods.go:89] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.298556   78747 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.298561   78747 system_pods.go:89] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.298567   78747 system_pods.go:89] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.298576   78747 system_pods.go:126] duration metric: took 5.030455ms to wait for k8s-apps to be running ...
	I0816 00:38:23.298585   78747 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:23.298632   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:23.318383   78747 system_svc.go:56] duration metric: took 19.787836ms WaitForService to wait for kubelet
	I0816 00:38:23.318419   78747 kubeadm.go:582] duration metric: took 4m23.105331758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:23.318446   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:23.322398   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:23.322425   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:23.322436   78747 node_conditions.go:105] duration metric: took 3.985107ms to run NodePressure ...
	I0816 00:38:23.322447   78747 start.go:241] waiting for startup goroutines ...
	I0816 00:38:23.322454   78747 start.go:246] waiting for cluster config update ...
	I0816 00:38:23.322464   78747 start.go:255] writing updated cluster config ...
	I0816 00:38:23.322801   78747 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:23.374057   78747 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:23.376186   78747 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-616827" cluster and "default" namespace by default
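For reference, the wait sequence this profile just completed (apiserver healthz, kube-system pod list, default service account, kubelet service, NodePressure) can be re-checked by hand. This is a minimal sketch built only from the endpoints and commands visible in the log above; the kubectl context name is assumed to match the profile name, and -k is assumed because the apiserver certificate is signed by the cluster's own CA:

	# apiserver health endpoint polled at 00:38:19 (expects the literal reply "ok")
	curl -k https://192.168.50.128:8444/healthz
	# kube-system pods inspected by the system_pods wait loop
	kubectl --context default-k8s-diff-port-616827 -n kube-system get pods
	# default service account that default_sa waits for
	kubectl --context default-k8s-diff-port-616827 -n default get serviceaccount default
	# kubelet check (the log's exact invocation is "systemctl is-active --quiet service kubelet")
	sudo systemctl is-active kubelet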
	I0816 00:38:21.788969   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:38:21.800050   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:38:21.811193   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:38:21.811216   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:38:21.811260   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:38:21.821328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:38:21.821391   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:38:21.831777   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:38:21.841357   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:38:21.841424   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:38:21.851564   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.861262   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:38:21.861322   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.871929   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:38:21.881544   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:38:21.881595   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:38:21.891725   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:38:22.120640   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
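The stale-config cleanup that precedes kubeadm init here (and again at 00:39:29 for the no-preload profile below) reduces to: list the four kubeconfig files, grep each one for the expected control-plane endpoint, remove any file that does not contain it, then re-run kubeadm init while ignoring preflight errors for the directories and files that already exist. A compressed sketch assembled from the commands shown in the log; error handling is simplified and the --ignore-preflight-errors list is abbreviated (see the full invocation above for the complete list):

	for f in admin kubelet controller-manager scheduler; do
	  conf=/etc/kubernetes/$f.conf
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q https://control-plane.minikube.internal:8443 "$conf" || sudo rm -f "$conf"
	done
	# re-initialise the control plane with the pinned kubeadm binary
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem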
	I0816 00:38:22.997351   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:25.494851   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:27.494976   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:29.495248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:31.994586   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:33.995565   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:36.494547   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:38.495194   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:40.995653   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:42.996593   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:45.495409   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:47.496072   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:49.997645   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:52.496097   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:54.994390   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:56.995869   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:58.996230   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:01.495217   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:02.989403   78489 pod_ready.go:82] duration metric: took 4m0.001106911s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	E0816 00:39:02.989435   78489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 00:39:02.989456   78489 pod_ready.go:39] duration metric: took 4m14.547419665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:02.989488   78489 kubeadm.go:597] duration metric: took 4m21.799297957s to restartPrimaryControlPlane
	W0816 00:39:02.989550   78489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:39:02.989582   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:39:29.166109   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.176504479s)
	I0816 00:39:29.166193   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:29.188082   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:39:29.207577   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:39:29.230485   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:39:29.230510   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:39:29.230564   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:39:29.242106   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:39:29.242177   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:39:29.258756   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:39:29.272824   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:39:29.272896   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:39:29.285574   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.294909   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:39:29.294985   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.304843   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:39:29.315125   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:39:29.315173   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:39:29.325422   78489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:39:29.375775   78489 kubeadm.go:310] W0816 00:39:29.358885    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.376658   78489 kubeadm.go:310] W0816 00:39:29.359753    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.504337   78489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:39:38.219769   78489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 00:39:38.219865   78489 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:39:38.219968   78489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:39:38.220094   78489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:39:38.220215   78489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 00:39:38.220302   78489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:39:38.221971   78489 out.go:235]   - Generating certificates and keys ...
	I0816 00:39:38.222037   78489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:39:38.222119   78489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:39:38.222234   78489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:39:38.222316   78489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:39:38.222430   78489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:39:38.222509   78489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:39:38.222584   78489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:39:38.222684   78489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:39:38.222767   78489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:39:38.222831   78489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:39:38.222862   78489 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:39:38.222943   78489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:39:38.223035   78489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:39:38.223121   78489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 00:39:38.223212   78489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:39:38.223299   78489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:39:38.223355   78489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:39:38.223452   78489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:39:38.223534   78489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:39:38.225012   78489 out.go:235]   - Booting up control plane ...
	I0816 00:39:38.225086   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:39:38.225153   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:39:38.225211   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:39:38.225296   78489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:39:38.225366   78489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:39:38.225399   78489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:39:38.225542   78489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 00:39:38.225706   78489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 00:39:38.225803   78489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001324649s
	I0816 00:39:38.225917   78489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 00:39:38.226004   78489 kubeadm.go:310] [api-check] The API server is healthy after 5.001672205s
	I0816 00:39:38.226125   78489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 00:39:38.226267   78489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 00:39:38.226352   78489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 00:39:38.226537   78489 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-819398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 00:39:38.226620   78489 kubeadm.go:310] [bootstrap-token] Using token: 4qqrpj.xeaneqftblh8gcp3
	I0816 00:39:38.227962   78489 out.go:235]   - Configuring RBAC rules ...
	I0816 00:39:38.228060   78489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 00:39:38.228140   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 00:39:38.228290   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 00:39:38.228437   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 00:39:38.228558   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 00:39:38.228697   78489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 00:39:38.228877   78489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 00:39:38.228942   78489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 00:39:38.229000   78489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 00:39:38.229010   78489 kubeadm.go:310] 
	I0816 00:39:38.229086   78489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 00:39:38.229096   78489 kubeadm.go:310] 
	I0816 00:39:38.229160   78489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 00:39:38.229166   78489 kubeadm.go:310] 
	I0816 00:39:38.229186   78489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 00:39:38.229252   78489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 00:39:38.229306   78489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 00:39:38.229312   78489 kubeadm.go:310] 
	I0816 00:39:38.229361   78489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 00:39:38.229367   78489 kubeadm.go:310] 
	I0816 00:39:38.229403   78489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 00:39:38.229408   78489 kubeadm.go:310] 
	I0816 00:39:38.229447   78489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 00:39:38.229504   78489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 00:39:38.229562   78489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 00:39:38.229567   78489 kubeadm.go:310] 
	I0816 00:39:38.229636   78489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 00:39:38.229701   78489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 00:39:38.229707   78489 kubeadm.go:310] 
	I0816 00:39:38.229793   78489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.229925   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0816 00:39:38.229954   78489 kubeadm.go:310] 	--control-plane 
	I0816 00:39:38.229960   78489 kubeadm.go:310] 
	I0816 00:39:38.230029   78489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 00:39:38.230038   78489 kubeadm.go:310] 
	I0816 00:39:38.230109   78489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.230211   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
	I0816 00:39:38.230223   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:39:38.230232   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:39:38.231742   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:39:38.233079   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:39:38.245435   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
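The bridge CNI step above only copies a single conflist into /etc/cni/net.d; the 496-byte file itself is not reproduced in the log. The sketch below is a typical bridge-plus-portmap conflist written the same way, shown purely to illustrate what such a file looks like — the subnet and plugin settings are assumptions, not the contents minikube actually writes:

	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF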
	I0816 00:39:38.269502   78489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:39:38.269566   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.269593   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-819398 minikube.k8s.io/updated_at=2024_08_16T00_39_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=no-preload-819398 minikube.k8s.io/primary=true
	I0816 00:39:38.304272   78489 ops.go:34] apiserver oom_adj: -16
	I0816 00:39:38.485643   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.986569   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.486177   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.985737   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.486311   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.985981   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.486071   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.986414   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.486292   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.603092   78489 kubeadm.go:1113] duration metric: took 4.333590575s to wait for elevateKubeSystemPrivileges
	I0816 00:39:42.603133   78489 kubeadm.go:394] duration metric: took 5m1.4690157s to StartCluster
	I0816 00:39:42.603158   78489 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.603258   78489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:39:42.604833   78489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.605072   78489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:39:42.605133   78489 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:39:42.605219   78489 addons.go:69] Setting storage-provisioner=true in profile "no-preload-819398"
	I0816 00:39:42.605254   78489 addons.go:234] Setting addon storage-provisioner=true in "no-preload-819398"
	I0816 00:39:42.605251   78489 addons.go:69] Setting default-storageclass=true in profile "no-preload-819398"
	I0816 00:39:42.605259   78489 addons.go:69] Setting metrics-server=true in profile "no-preload-819398"
	I0816 00:39:42.605295   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:39:42.605308   78489 addons.go:234] Setting addon metrics-server=true in "no-preload-819398"
	I0816 00:39:42.605309   78489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-819398"
	W0816 00:39:42.605320   78489 addons.go:243] addon metrics-server should already be in state true
	W0816 00:39:42.605266   78489 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:39:42.605355   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605370   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605697   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605717   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605731   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605735   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605777   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605837   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.606458   78489 out.go:177] * Verifying Kubernetes components...
	I0816 00:39:42.607740   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:39:42.622512   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0816 00:39:42.623130   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.623697   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.623720   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.624070   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.624666   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.624695   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.626221   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0816 00:39:42.626220   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0816 00:39:42.626608   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.626695   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.627158   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627179   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627329   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627346   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627490   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.627696   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.628049   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.628165   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.628189   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.632500   78489 addons.go:234] Setting addon default-storageclass=true in "no-preload-819398"
	W0816 00:39:42.632523   78489 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:39:42.632554   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.632897   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.632928   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.644779   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0816 00:39:42.645422   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.645995   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.646026   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.646395   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.646607   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.646960   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0816 00:39:42.647374   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.648126   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.648141   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.648471   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.649494   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.649732   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.651509   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.651600   78489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:39:42.652823   78489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:39:42.652936   78489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:42.652951   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:39:42.652970   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654197   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:39:42.654217   78489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:39:42.654234   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654380   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I0816 00:39:42.654812   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.655316   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.655332   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.655784   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.656330   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.656356   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.659148   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659319   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659629   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659648   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659776   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659794   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659959   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660138   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660164   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660330   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660444   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660478   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660587   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.660583   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.674431   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
	I0816 00:39:42.674827   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.675399   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.675420   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.675756   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.675993   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.677956   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.678195   78489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:42.678211   78489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:39:42.678230   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.681163   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681593   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.681615   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681916   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.682099   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.682197   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.682276   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.822056   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:39:42.840356   78489 node_ready.go:35] waiting up to 6m0s for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852864   78489 node_ready.go:49] node "no-preload-819398" has status "Ready":"True"
	I0816 00:39:42.852887   78489 node_ready.go:38] duration metric: took 12.497677ms for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852899   78489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:42.866637   78489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:42.908814   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:39:42.908832   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:39:42.949047   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:39:42.949070   78489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:39:42.959159   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:43.021536   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.021557   78489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:39:43.068214   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:43.082144   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.243834   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.243857   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244177   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244192   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.244201   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.244212   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244451   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244505   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.250358   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.250376   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.250608   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:43.250648   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.250656   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419115   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.350866587s)
	I0816 00:39:44.419166   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419175   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419519   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419545   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419542   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419561   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419573   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419824   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419836   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419851   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.436623   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.354435707s)
	I0816 00:39:44.436682   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.436697   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437131   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437150   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437160   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.437169   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437207   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.437495   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437517   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437528   78489 addons.go:475] Verifying addon metrics-server=true in "no-preload-819398"
	I0816 00:39:44.439622   78489 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 00:39:44.441097   78489 addons.go:510] duration metric: took 1.835961958s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0816 00:39:44.878479   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:47.373009   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:49.380832   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:50.372883   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.372919   78489 pod_ready.go:82] duration metric: took 7.506242182s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.372933   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378463   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.378486   78489 pod_ready.go:82] duration metric: took 5.546402ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378496   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383347   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.383364   78489 pod_ready.go:82] duration metric: took 4.862995ms for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383374   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387672   78489 pod_ready.go:93] pod "kube-proxy-nl7g6" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.387693   78489 pod_ready.go:82] duration metric: took 4.312811ms for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387703   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391921   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.391939   78489 pod_ready.go:82] duration metric: took 4.229092ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391945   78489 pod_ready.go:39] duration metric: took 7.539034647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:50.391958   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:39:50.392005   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:39:50.407980   78489 api_server.go:72] duration metric: took 7.802877941s to wait for apiserver process to appear ...
	I0816 00:39:50.408017   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:39:50.408039   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:39:50.412234   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:39:50.413278   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:39:50.413297   78489 api_server.go:131] duration metric: took 5.273051ms to wait for apiserver health ...
	I0816 00:39:50.413304   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:39:50.573185   78489 system_pods.go:59] 9 kube-system pods found
	I0816 00:39:50.573226   78489 system_pods.go:61] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.573233   78489 system_pods.go:61] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.573239   78489 system_pods.go:61] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.573244   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.573250   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.573257   78489 system_pods.go:61] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.573262   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.573271   78489 system_pods.go:61] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.573278   78489 system_pods.go:61] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.573288   78489 system_pods.go:74] duration metric: took 159.97729ms to wait for pod list to return data ...
	I0816 00:39:50.573301   78489 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:39:50.771164   78489 default_sa.go:45] found service account: "default"
	I0816 00:39:50.771189   78489 default_sa.go:55] duration metric: took 197.881739ms for default service account to be created ...
	I0816 00:39:50.771198   78489 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:39:50.973415   78489 system_pods.go:86] 9 kube-system pods found
	I0816 00:39:50.973448   78489 system_pods.go:89] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.973453   78489 system_pods.go:89] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.973457   78489 system_pods.go:89] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.973461   78489 system_pods.go:89] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.973465   78489 system_pods.go:89] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.973468   78489 system_pods.go:89] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.973471   78489 system_pods.go:89] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.973477   78489 system_pods.go:89] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.973482   78489 system_pods.go:89] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.973491   78489 system_pods.go:126] duration metric: took 202.288008ms to wait for k8s-apps to be running ...
	I0816 00:39:50.973498   78489 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:39:50.973539   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:50.989562   78489 system_svc.go:56] duration metric: took 16.053781ms WaitForService to wait for kubelet
	I0816 00:39:50.989595   78489 kubeadm.go:582] duration metric: took 8.384495377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:39:50.989618   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:39:51.171076   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:39:51.171109   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:39:51.171120   78489 node_conditions.go:105] duration metric: took 181.496732ms to run NodePressure ...
	I0816 00:39:51.171134   78489 start.go:241] waiting for startup goroutines ...
	I0816 00:39:51.171144   78489 start.go:246] waiting for cluster config update ...
	I0816 00:39:51.171157   78489 start.go:255] writing updated cluster config ...
	I0816 00:39:51.171465   78489 ssh_runner.go:195] Run: rm -f paused
	I0816 00:39:51.220535   78489 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:39:51.223233   78489 out.go:177] * Done! kubectl is now configured to use "no-preload-819398" cluster and "default" namespace by default
	I0816 00:40:18.143220   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:40:18.143333   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:40:18.144757   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.144804   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.144888   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.145018   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.145134   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:18.145210   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:18.146791   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:18.146879   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:18.146965   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:18.147072   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:18.147164   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:18.147258   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:18.147340   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:18.147434   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:18.147525   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:18.147613   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:18.147708   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:18.147744   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:18.147791   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:18.147839   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:18.147916   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:18.147989   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:18.148045   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:18.148194   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:18.148318   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:18.148365   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:18.148458   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:18.149817   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:18.149941   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:18.150044   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:18.150107   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:18.150187   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:18.150323   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:40:18.150380   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:40:18.150460   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150671   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.150766   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150953   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151033   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151232   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151305   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151520   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151614   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151840   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151856   79191 kubeadm.go:310] 
	I0816 00:40:18.151917   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:40:18.151978   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:40:18.151992   79191 kubeadm.go:310] 
	I0816 00:40:18.152046   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:40:18.152097   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:40:18.152204   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:40:18.152218   79191 kubeadm.go:310] 
	I0816 00:40:18.152314   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:40:18.152349   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:40:18.152377   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:40:18.152384   79191 kubeadm.go:310] 
	I0816 00:40:18.152466   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:40:18.152537   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:40:18.152543   79191 kubeadm.go:310] 
	I0816 00:40:18.152674   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:40:18.152769   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:40:18.152853   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:40:18.152914   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:40:18.152978   79191 kubeadm.go:310] 
	W0816 00:40:18.153019   79191 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 00:40:18.153055   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:40:18.634058   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:40:18.648776   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:40:18.659504   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:40:18.659529   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:40:18.659584   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:40:18.670234   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:40:18.670285   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:40:18.680370   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:40:18.689496   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:40:18.689557   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:40:18.698949   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.708056   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:40:18.708118   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.718261   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:40:18.728708   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:40:18.728777   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:40:18.739253   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:40:18.819666   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.819746   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.966568   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.966704   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.966868   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:19.168323   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:19.170213   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:19.170335   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:19.170464   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:19.170546   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:19.170598   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:19.170670   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:19.170740   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:19.170828   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:19.170924   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:19.171031   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:19.171129   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:19.171179   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:19.171261   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:19.421256   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:19.585260   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:19.672935   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:19.928620   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:19.952420   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:19.953527   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:19.953578   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:20.090384   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:20.092904   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:20.093037   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:20.105743   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:20.106980   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:20.108199   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:20.111014   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:41:00.113053   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:41:00.113479   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:00.113752   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:05.113795   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:05.114091   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:15.114695   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:15.114932   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:35.116019   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:35.116207   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.116728   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:42:15.116994   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.117018   79191 kubeadm.go:310] 
	I0816 00:42:15.117071   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:42:15.117136   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:42:15.117147   79191 kubeadm.go:310] 
	I0816 00:42:15.117198   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:42:15.117248   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:42:15.117402   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:42:15.117412   79191 kubeadm.go:310] 
	I0816 00:42:15.117543   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:42:15.117601   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:42:15.117636   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:42:15.117644   79191 kubeadm.go:310] 
	I0816 00:42:15.117778   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:42:15.117918   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:42:15.117929   79191 kubeadm.go:310] 
	I0816 00:42:15.118083   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:42:15.118215   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:42:15.118313   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:42:15.118412   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:42:15.118433   79191 kubeadm.go:310] 
	I0816 00:42:15.118582   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:42:15.118698   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:42:15.118843   79191 kubeadm.go:394] duration metric: took 8m2.460648867s to StartCluster
	I0816 00:42:15.118855   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:42:15.118891   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:42:15.118957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:42:15.162809   79191 cri.go:89] found id: ""
	I0816 00:42:15.162837   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.162848   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:42:15.162855   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:42:15.162925   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:42:15.198020   79191 cri.go:89] found id: ""
	I0816 00:42:15.198042   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.198053   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:42:15.198063   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:42:15.198132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:42:15.238168   79191 cri.go:89] found id: ""
	I0816 00:42:15.238197   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.238206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:42:15.238213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:42:15.238273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:42:15.278364   79191 cri.go:89] found id: ""
	I0816 00:42:15.278391   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.278401   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:42:15.278407   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:42:15.278465   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:42:15.316182   79191 cri.go:89] found id: ""
	I0816 00:42:15.316209   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.316216   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:42:15.316222   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:42:15.316278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:42:15.352934   79191 cri.go:89] found id: ""
	I0816 00:42:15.352962   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.352970   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:42:15.352976   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:42:15.353031   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:42:15.388940   79191 cri.go:89] found id: ""
	I0816 00:42:15.388966   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.388973   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:42:15.388983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:42:15.389042   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:42:15.424006   79191 cri.go:89] found id: ""
	I0816 00:42:15.424035   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.424043   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:42:15.424054   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:42:15.424073   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:42:15.504823   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:42:15.504846   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:42:15.504858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:42:15.608927   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:42:15.608959   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:42:15.676785   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:42:15.676810   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:42:15.744763   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:42:15.744805   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0816 00:42:15.760944   79191 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 00:42:15.761012   79191 out.go:270] * 
	W0816 00:42:15.761078   79191 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.761098   79191 out.go:270] * 
	W0816 00:42:15.762220   79191 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:42:15.765697   79191 out.go:201] 
	W0816 00:42:15.766942   79191 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.767018   79191 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 00:42:15.767040   79191 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 00:42:15.768526   79191 out.go:201] 
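	A minimal sketch of acting on the suggestion above (illustrative only, not part of the captured log): retry the start with the kubelet cgroup driver pinned to systemd, then inspect the kubelet journal on the node. Here <profile> is a placeholder for the affected cluster profile, whose real name is not shown in this excerpt.
	
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
		minikube -p <profile> ssh "sudo journalctl -xeu kubelet"
	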
	
	
	==> CRI-O <==
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.799453987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769228799427948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc6853dd-2a7c-45f8-92a5-cdae72f27772 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.800143239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88a5b7cc-2b87-4ac5-b0fe-e323dbc0cf6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.800215206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88a5b7cc-2b87-4ac5-b0fe-e323dbc0cf6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.800418034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768447843336160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901436142b66005d7e7eeec98b2fd068f1d3c25b0fd7ac6ead4d82f112ac935a,PodSandboxId:342f73cb40d64d7bc8cda9c88be481ae9cf08f80c727484d4a17564d0d665388,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768425903172036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5,PodSandboxId:173fab85479db6a9c5c09041d2687b6a1e849983052a937ef313149cebd29482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768424751583816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-54gqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768417007007341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110,PodSandboxId:af1a2b4ddcaabb6cafc78819724fb23547ff7912af880f3bb4bd54f0e24c8874,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768416996249241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xc89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b4bb32-a0cf-4147-957d-83b3ed13a
b06,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3,PodSandboxId:f3769b8ad536eb3a2ef92088c92a36aff93f3f173a5e4f9ee7b524f5edc8969a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768413361960077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddfb14b1026513b97fb9b58c31b967d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a,PodSandboxId:5ee83674c575a20f37423399a14e074d4d2c922943932a22b5d75b2538c21ea9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768413269749088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f559d81bdb4acc95208893e11d87e1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2,PodSandboxId:ea1c3acb4de0ebf14d64e96b76d2ee29e8aaace0d900089476a8ad91633f020e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768413335009325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e260ccf04023759b027fb8adcd82425b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6,PodSandboxId:1e623530187b473822202607a845eaa268bc860e2b04d928cf6132e81631741b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768413231619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 445cf946cdc1d4e383a184c067c48f41,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88a5b7cc-2b87-4ac5-b0fe-e323dbc0cf6f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.842476657Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b019429a-de70-49c9-be8d-df808710c870 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.842573211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b019429a-de70-49c9-be8d-df808710c870 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.844071310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11b9b445-4535-4d06-affa-5eb7d1e57e38 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.844647709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769228844620587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11b9b445-4535-4d06-affa-5eb7d1e57e38 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.845372058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a801a60a-c6d3-43f6-89c7-c16954065a46 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.845443246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a801a60a-c6d3-43f6-89c7-c16954065a46 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.845635710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768447843336160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901436142b66005d7e7eeec98b2fd068f1d3c25b0fd7ac6ead4d82f112ac935a,PodSandboxId:342f73cb40d64d7bc8cda9c88be481ae9cf08f80c727484d4a17564d0d665388,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768425903172036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5,PodSandboxId:173fab85479db6a9c5c09041d2687b6a1e849983052a937ef313149cebd29482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768424751583816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-54gqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768417007007341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110,PodSandboxId:af1a2b4ddcaabb6cafc78819724fb23547ff7912af880f3bb4bd54f0e24c8874,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768416996249241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xc89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b4bb32-a0cf-4147-957d-83b3ed13a
b06,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3,PodSandboxId:f3769b8ad536eb3a2ef92088c92a36aff93f3f173a5e4f9ee7b524f5edc8969a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768413361960077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddfb14b1026513b97fb9b58c31b967d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a,PodSandboxId:5ee83674c575a20f37423399a14e074d4d2c922943932a22b5d75b2538c21ea9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768413269749088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f559d81bdb4acc95208893e11d87e1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2,PodSandboxId:ea1c3acb4de0ebf14d64e96b76d2ee29e8aaace0d900089476a8ad91633f020e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768413335009325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e260ccf04023759b027fb8adcd82425b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6,PodSandboxId:1e623530187b473822202607a845eaa268bc860e2b04d928cf6132e81631741b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768413231619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 445cf946cdc1d4e383a184c067c48f41,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a801a60a-c6d3-43f6-89c7-c16954065a46 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.886194106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94b0f358-9c6f-45ce-b03b-a5f8693931f8 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.886311929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94b0f358-9c6f-45ce-b03b-a5f8693931f8 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.887534363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=141e8e1e-6146-4435-806e-8006eed4f6b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.888159986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769228888124043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=141e8e1e-6146-4435-806e-8006eed4f6b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.888766507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2da8514-3187-4837-9ace-c21b941ec153 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.888823086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2da8514-3187-4837-9ace-c21b941ec153 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.890217072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768447843336160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901436142b66005d7e7eeec98b2fd068f1d3c25b0fd7ac6ead4d82f112ac935a,PodSandboxId:342f73cb40d64d7bc8cda9c88be481ae9cf08f80c727484d4a17564d0d665388,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768425903172036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5,PodSandboxId:173fab85479db6a9c5c09041d2687b6a1e849983052a937ef313149cebd29482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768424751583816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-54gqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768417007007341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110,PodSandboxId:af1a2b4ddcaabb6cafc78819724fb23547ff7912af880f3bb4bd54f0e24c8874,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768416996249241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xc89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b4bb32-a0cf-4147-957d-83b3ed13a
b06,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3,PodSandboxId:f3769b8ad536eb3a2ef92088c92a36aff93f3f173a5e4f9ee7b524f5edc8969a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768413361960077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddfb14b1026513b97fb9b58c31b967d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a,PodSandboxId:5ee83674c575a20f37423399a14e074d4d2c922943932a22b5d75b2538c21ea9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768413269749088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f559d81bdb4acc95208893e11d87e1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2,PodSandboxId:ea1c3acb4de0ebf14d64e96b76d2ee29e8aaace0d900089476a8ad91633f020e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768413335009325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e260ccf04023759b027fb8adcd82425b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6,PodSandboxId:1e623530187b473822202607a845eaa268bc860e2b04d928cf6132e81631741b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768413231619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 445cf946cdc1d4e383a184c067c48f41,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2da8514-3187-4837-9ace-c21b941ec153 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.927556983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db80209b-a8b6-4c05-9c39-927c393bfa6e name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.927875123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db80209b-a8b6-4c05-9c39-927c393bfa6e name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.929289413Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fb3739e-9caa-4328-9471-89dfc4c480ea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.929684936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769228929661706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fb3739e-9caa-4328-9471-89dfc4c480ea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.930500487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efac70e0-9a9f-44a4-8a05-697b10cff3b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.930553814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efac70e0-9a9f-44a4-8a05-697b10cff3b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:08 embed-certs-758469 crio[728]: time="2024-08-16 00:47:08.930735879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768447843336160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901436142b66005d7e7eeec98b2fd068f1d3c25b0fd7ac6ead4d82f112ac935a,PodSandboxId:342f73cb40d64d7bc8cda9c88be481ae9cf08f80c727484d4a17564d0d665388,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768425903172036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5,PodSandboxId:173fab85479db6a9c5c09041d2687b6a1e849983052a937ef313149cebd29482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768424751583816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-54gqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768417007007341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110,PodSandboxId:af1a2b4ddcaabb6cafc78819724fb23547ff7912af880f3bb4bd54f0e24c8874,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768416996249241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xc89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b4bb32-a0cf-4147-957d-83b3ed13a
b06,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3,PodSandboxId:f3769b8ad536eb3a2ef92088c92a36aff93f3f173a5e4f9ee7b524f5edc8969a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768413361960077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddfb14b1026513b97fb9b58c31b967d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a,PodSandboxId:5ee83674c575a20f37423399a14e074d4d2c922943932a22b5d75b2538c21ea9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768413269749088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f559d81bdb4acc95208893e11d87e1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2,PodSandboxId:ea1c3acb4de0ebf14d64e96b76d2ee29e8aaace0d900089476a8ad91633f020e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768413335009325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e260ccf04023759b027fb8adcd82425b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6,PodSandboxId:1e623530187b473822202607a845eaa268bc860e2b04d928cf6132e81631741b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768413231619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 445cf946cdc1d4e383a184c067c48f41,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efac70e0-9a9f-44a4-8a05-697b10cff3b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2ba9e1d7af63a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   9fda6f0a2567d       storage-provisioner
	901436142b660       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   342f73cb40d64       busybox
	8ecab8c44d72a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   173fab85479db       coredns-6f6b679f8f-54gqb
	a14a1aef37ee3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   9fda6f0a2567d       storage-provisioner
	513d50297bc22       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   af1a2b4ddcaab       kube-proxy-4xc89
	dcadfb0e98975       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   f3769b8ad536e       kube-scheduler-embed-certs-758469
	2cc2751644145       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   ea1c3acb4de0e       kube-controller-manager-embed-certs-758469
	a23eed518f172       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   5ee83674c575a       etcd-embed-certs-758469
	a17b85fff4759       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   1e623530187b4       kube-apiserver-embed-certs-758469
	
	
	==> coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60039 - 64859 "HINFO IN 4609580037883277511.2890640239383133867. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010570491s
	
	
	==> describe nodes <==
	Name:               embed-certs-758469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-758469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=embed-certs-758469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T00_25_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 00:25:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-758469
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:47:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 00:44:19 +0000   Fri, 16 Aug 2024 00:25:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 00:44:19 +0000   Fri, 16 Aug 2024 00:25:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 00:44:19 +0000   Fri, 16 Aug 2024 00:25:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 00:44:19 +0000   Fri, 16 Aug 2024 00:33:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    embed-certs-758469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3465190e779743bea5b334f70d6b0148
	  System UUID:                3465190e-7797-43be-a5b3-34f70d6b0148
	  Boot ID:                    b88915e2-7fd1-43d6-ad03-378a0e00fe29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-6f6b679f8f-54gqb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-758469                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-758469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-758469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-4xc89                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-758469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-pnmsm               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-758469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-758469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-758469 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-758469 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-758469 event: Registered Node embed-certs-758469 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-758469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-758469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-758469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-758469 event: Registered Node embed-certs-758469 in Controller
	
	
	==> dmesg <==
	[Aug16 00:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050703] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039351] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.791649] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.495460] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.613096] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.316444] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.054722] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061271] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.165924] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.158384] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.306593] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.266557] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.061482] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.321338] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +4.593739] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.428528] systemd-fstab-generator[1558]: Ignoring "noauto" option for root device
	[  +1.312171] kauditd_printk_skb: 64 callbacks suppressed
	[ +11.806439] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] <==
	{"level":"info","ts":"2024-08-16T00:33:33.885526Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T00:33:35.109111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-16T00:33:35.109221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T00:33:35.109283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-08-16T00:33:35.109331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:35.109356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:35.109383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:35.109408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:35.110970Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:embed-certs-758469 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T00:33:35.111010Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:33:35.111091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:33:35.111600Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T00:33:35.111653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T00:33:35.112836Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:33:35.113926Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-08-16T00:33:35.112884Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:33:35.115414Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-16T00:33:51.572570Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.556854ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814266034637402784 > lease_revoke:<id:192d915892f7e604>","response":"size:29"}
	{"level":"warn","ts":"2024-08-16T00:33:51.773676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.941655ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814266034637402785 > lease_revoke:<id:192d915892f7e5aa>","response":"size:29"}
	{"level":"info","ts":"2024-08-16T00:33:51.773750Z","caller":"traceutil/trace.go:171","msg":"trace[573502085] linearizableReadLoop","detail":"{readStateIndex:643; appliedIndex:641; }","duration":"266.568829ms","start":"2024-08-16T00:33:51.507171Z","end":"2024-08-16T00:33:51.773740Z","steps":["trace[573502085] 'read index received'  (duration: 22.312µs)","trace[573502085] 'applied index is now lower than readState.Index'  (duration: 266.545686ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T00:33:51.773941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.710804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-6f6b679f8f-54gqb\" ","response":"range_response_count:1 size:5042"}
	{"level":"info","ts":"2024-08-16T00:33:51.773976Z","caller":"traceutil/trace.go:171","msg":"trace[428233582] range","detail":"{range_begin:/registry/pods/kube-system/coredns-6f6b679f8f-54gqb; range_end:; response_count:1; response_revision:602; }","duration":"266.806588ms","start":"2024-08-16T00:33:51.507163Z","end":"2024-08-16T00:33:51.773970Z","steps":["trace[428233582] 'agreement among raft nodes before linearized reading'  (duration: 266.604231ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T00:43:35.143405Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":851}
	{"level":"info","ts":"2024-08-16T00:43:35.154077Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":851,"took":"9.753016ms","hash":2822274117,"current-db-size-bytes":2539520,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2539520,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-08-16T00:43:35.154220Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2822274117,"revision":851,"compact-revision":-1}
	
	
	==> kernel <==
	 00:47:09 up 14 min,  0 users,  load average: 0.14, 0.13, 0.10
	Linux embed-certs-758469 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] <==
	W0816 00:43:37.397081       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:43:37.397172       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 00:43:37.398142       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:43:37.398230       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:44:37.398791       1 handler_proxy.go:99] no RequestInfo found in the context
	W0816 00:44:37.399022       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:44:37.399216       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0816 00:44:37.399231       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 00:44:37.400418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:44:37.400447       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:46:37.401245       1 handler_proxy.go:99] no RequestInfo found in the context
	W0816 00:46:37.401283       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:46:37.401635       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0816 00:46:37.401652       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 00:46:37.402860       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:46:37.402943       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] <==
	E0816 00:41:42.023686       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:41:42.488801       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:42:12.029995       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:42:12.498476       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:42:42.037071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:42:42.507667       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:43:12.044219       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:43:12.516865       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:43:42.050363       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:43:42.525425       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:44:12.056522       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:44:12.533629       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:44:19.105399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-758469"
	I0816 00:44:36.654796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="279.287µs"
	E0816 00:44:42.063353       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:44:42.541558       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:44:50.652216       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="136.88µs"
	E0816 00:45:12.071158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:45:12.553276       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:45:42.077283       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:45:42.562302       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:46:12.083585       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:46:12.572217       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:46:42.090091       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:46:42.582269       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 00:33:37.223147       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 00:33:37.234730       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E0816 00:33:37.234867       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 00:33:37.270876       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 00:33:37.271016       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 00:33:37.271116       1 server_linux.go:169] "Using iptables Proxier"
	I0816 00:33:37.273803       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 00:33:37.274224       1 server.go:483] "Version info" version="v1.31.0"
	I0816 00:33:37.274255       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:33:37.276110       1 config.go:197] "Starting service config controller"
	I0816 00:33:37.276170       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 00:33:37.276193       1 config.go:104] "Starting endpoint slice config controller"
	I0816 00:33:37.276196       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 00:33:37.278319       1 config.go:326] "Starting node config controller"
	I0816 00:33:37.278433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 00:33:37.376833       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 00:33:37.377030       1 shared_informer.go:320] Caches are synced for service config
	I0816 00:33:37.379256       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] <==
	I0816 00:33:34.371337       1 serving.go:386] Generated self-signed cert in-memory
	W0816 00:33:36.342194       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 00:33:36.342285       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 00:33:36.342295       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 00:33:36.342356       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 00:33:36.414680       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 00:33:36.414943       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:33:36.417411       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 00:33:36.417618       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 00:33:36.417545       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 00:33:36.421995       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 00:33:36.522962       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 00:45:55 embed-certs-758469 kubelet[937]: E0816 00:45:55.631559     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:46:02 embed-certs-758469 kubelet[937]: E0816 00:46:02.791675     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769162791441212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:02 embed-certs-758469 kubelet[937]: E0816 00:46:02.791699     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769162791441212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:09 embed-certs-758469 kubelet[937]: E0816 00:46:09.631309     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:46:12 embed-certs-758469 kubelet[937]: E0816 00:46:12.793197     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769172792565474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:12 embed-certs-758469 kubelet[937]: E0816 00:46:12.793271     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769172792565474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:21 embed-certs-758469 kubelet[937]: E0816 00:46:21.632084     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:46:22 embed-certs-758469 kubelet[937]: E0816 00:46:22.795568     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769182794868914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:22 embed-certs-758469 kubelet[937]: E0816 00:46:22.796016     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769182794868914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:32 embed-certs-758469 kubelet[937]: E0816 00:46:32.635744     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:46:32 embed-certs-758469 kubelet[937]: E0816 00:46:32.650533     937 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 00:46:32 embed-certs-758469 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 00:46:32 embed-certs-758469 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 00:46:32 embed-certs-758469 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 00:46:32 embed-certs-758469 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 00:46:32 embed-certs-758469 kubelet[937]: E0816 00:46:32.797312     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769192796832627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:32 embed-certs-758469 kubelet[937]: E0816 00:46:32.797339     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769192796832627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:42 embed-certs-758469 kubelet[937]: E0816 00:46:42.799679     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769202799333150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:42 embed-certs-758469 kubelet[937]: E0816 00:46:42.799735     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769202799333150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:46 embed-certs-758469 kubelet[937]: E0816 00:46:46.634636     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:46:52 embed-certs-758469 kubelet[937]: E0816 00:46:52.801975     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769212801703961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:52 embed-certs-758469 kubelet[937]: E0816 00:46:52.802007     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769212801703961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:58 embed-certs-758469 kubelet[937]: E0816 00:46:58.632545     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:47:02 embed-certs-758469 kubelet[937]: E0816 00:47:02.804114     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769222803758816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:47:02 embed-certs-758469 kubelet[937]: E0816 00:47:02.804415     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769222803758816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] <==
	I0816 00:34:07.988180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 00:34:08.002401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 00:34:08.002641       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 00:34:25.422064       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 00:34:25.422685       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-758469_eca3381c-6415-4fc5-9e7e-a8c2568ab38e!
	I0816 00:34:25.422327       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cdee9e7c-b24b-41ee-a3da-288faf7470a2", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-758469_eca3381c-6415-4fc5-9e7e-a8c2568ab38e became leader
	I0816 00:34:25.526205       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-758469_eca3381c-6415-4fc5-9e7e-a8c2568ab38e!
	
	
	==> storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] <==
	I0816 00:33:37.151572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 00:34:07.154091       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-758469 -n embed-certs-758469
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-758469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-pnmsm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-758469 describe pod metrics-server-6867b74b74-pnmsm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-758469 describe pod metrics-server-6867b74b74-pnmsm: exit status 1 (71.85527ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-pnmsm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-758469 describe pod metrics-server-6867b74b74-pnmsm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0816 00:38:37.364947   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:38:43.006930   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:39:21.519723   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-16 00:47:23.914483553 +0000 UTC m=+6118.495062531
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-616827 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-616827 logs -n 25: (2.175680379s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-697641 sudo cat                              | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo find                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo crio                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-697641                                       | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-067133 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | disable-driver-mounts-067133                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:25 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-819398             | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC | 16 Aug 24 00:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-758469            | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-616827  | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098619        | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-819398                  | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-758469                 | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-616827       | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098619             | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:29:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 00:29:51.785297   79191 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:29:51.785388   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785392   79191 out.go:358] Setting ErrFile to fd 2...
	I0816 00:29:51.785396   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785578   79191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:29:51.786145   79191 out.go:352] Setting JSON to false
	I0816 00:29:51.787066   79191 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7892,"bootTime":1723760300,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:29:51.787122   79191 start.go:139] virtualization: kvm guest
	I0816 00:29:51.789057   79191 out.go:177] * [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:29:51.790274   79191 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:29:51.790269   79191 notify.go:220] Checking for updates...
	I0816 00:29:51.792828   79191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:29:51.794216   79191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:29:51.795553   79191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:29:51.796761   79191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:29:51.798018   79191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:29:51.799561   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:29:51.799935   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.799990   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.814617   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0816 00:29:51.815056   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.815584   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.815606   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.815933   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.816131   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:51.817809   79191 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 00:29:51.819204   79191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:29:51.819604   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.819652   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.834270   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0816 00:29:51.834584   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.834992   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.835015   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.835303   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.835478   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:49.226097   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:51.870472   79191 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:29:51.872031   79191 start.go:297] selected driver: kvm2
	I0816 00:29:51.872049   79191 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.872137   79191 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:29:51.872785   79191 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.872848   79191 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:29:51.887731   79191 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:29:51.888078   79191 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:29:51.888141   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:29:51.888154   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:29:51.888203   79191 start.go:340] cluster config:
	{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.888300   79191 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.890190   79191 out.go:177] * Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	I0816 00:29:51.891529   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:29:51.891557   79191 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:29:51.891565   79191 cache.go:56] Caching tarball of preloaded images
	I0816 00:29:51.891645   79191 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:29:51.891664   79191 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 00:29:51.891747   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:29:51.891915   79191 start.go:360] acquireMachinesLock for old-k8s-version-098619: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:29:55.306158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:58.378266   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:04.458137   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:07.530158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:13.610160   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:16.682057   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:22.762088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:25.834157   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:31.914106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:34.986091   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:41.066143   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:44.138152   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:50.218140   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:53.290166   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:59.370080   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:02.442130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:08.522126   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:11.594144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:17.674104   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:20.746185   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:26.826131   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:29.898113   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:35.978100   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:39.050136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:45.130120   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:48.202078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:54.282078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:57.354088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:03.434136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:06.506153   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:12.586125   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:15.658144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:21.738130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:24.810191   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:30.890130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:33.962132   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:40.042062   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:43.114154   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:49.194151   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:52.266130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:58.346106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:01.418139   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:04.422042   78713 start.go:364] duration metric: took 4m25.166768519s to acquireMachinesLock for "embed-certs-758469"
	I0816 00:33:04.422099   78713 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:04.422107   78713 fix.go:54] fixHost starting: 
	I0816 00:33:04.422426   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:04.422458   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:04.437335   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I0816 00:33:04.437779   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:04.438284   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:04.438306   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:04.438646   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:04.438873   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:04.439045   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:04.440597   78713 fix.go:112] recreateIfNeeded on embed-certs-758469: state=Stopped err=<nil>
	I0816 00:33:04.440627   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	W0816 00:33:04.440781   78713 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:04.442527   78713 out.go:177] * Restarting existing kvm2 VM for "embed-certs-758469" ...
	I0816 00:33:04.419735   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:04.419772   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420077   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:33:04.420102   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420299   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:33:04.421914   78489 machine.go:96] duration metric: took 4m37.429789672s to provisionDockerMachine
	I0816 00:33:04.421957   78489 fix.go:56] duration metric: took 4m37.451098771s for fixHost
	I0816 00:33:04.421965   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 4m37.451130669s
	W0816 00:33:04.421995   78489 start.go:714] error starting host: provision: host is not running
	W0816 00:33:04.422099   78489 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 00:33:04.422111   78489 start.go:729] Will try again in 5 seconds ...
	I0816 00:33:04.443838   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Start
	I0816 00:33:04.444035   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring networks are active...
	I0816 00:33:04.444849   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network default is active
	I0816 00:33:04.445168   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network mk-embed-certs-758469 is active
	I0816 00:33:04.445491   78713 main.go:141] libmachine: (embed-certs-758469) Getting domain xml...
	I0816 00:33:04.446159   78713 main.go:141] libmachine: (embed-certs-758469) Creating domain...
	I0816 00:33:05.654817   78713 main.go:141] libmachine: (embed-certs-758469) Waiting to get IP...
	I0816 00:33:05.655625   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.656020   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.656064   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.655983   79868 retry.go:31] will retry after 273.341379ms: waiting for machine to come up
	I0816 00:33:05.930542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.931038   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.931061   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.931001   79868 retry.go:31] will retry after 320.172619ms: waiting for machine to come up
	I0816 00:33:06.252718   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.253117   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.253140   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.253091   79868 retry.go:31] will retry after 441.386495ms: waiting for machine to come up
	I0816 00:33:06.695681   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.696108   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.696134   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.696065   79868 retry.go:31] will retry after 491.272986ms: waiting for machine to come up
	I0816 00:33:07.188683   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.189070   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.189092   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.189025   79868 retry.go:31] will retry after 536.865216ms: waiting for machine to come up
	I0816 00:33:07.727831   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.728246   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.728276   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.728193   79868 retry.go:31] will retry after 813.064342ms: waiting for machine to come up
	I0816 00:33:08.543096   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:08.543605   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:08.543637   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:08.543549   79868 retry.go:31] will retry after 1.00495091s: waiting for machine to come up
	I0816 00:33:09.424586   78489 start.go:360] acquireMachinesLock for no-preload-819398: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:33:09.549815   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:09.550226   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:09.550255   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:09.550175   79868 retry.go:31] will retry after 1.483015511s: waiting for machine to come up
	I0816 00:33:11.034879   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:11.035277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:11.035315   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:11.035224   79868 retry.go:31] will retry after 1.513237522s: waiting for machine to come up
	I0816 00:33:12.550817   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:12.551172   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:12.551196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:12.551126   79868 retry.go:31] will retry after 1.483165174s: waiting for machine to come up
	I0816 00:33:14.036748   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:14.037142   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:14.037170   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:14.037087   79868 retry.go:31] will retry after 1.772679163s: waiting for machine to come up
	I0816 00:33:15.811699   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:15.812300   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:15.812334   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:15.812226   79868 retry.go:31] will retry after 3.026936601s: waiting for machine to come up
	I0816 00:33:18.842362   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:18.842759   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:18.842788   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:18.842715   79868 retry.go:31] will retry after 4.400445691s: waiting for machine to come up
	I0816 00:33:23.247813   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248223   78713 main.go:141] libmachine: (embed-certs-758469) Found IP for machine: 192.168.39.185
	I0816 00:33:23.248254   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has current primary IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248265   78713 main.go:141] libmachine: (embed-certs-758469) Reserving static IP address...
	I0816 00:33:23.248613   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.248641   78713 main.go:141] libmachine: (embed-certs-758469) DBG | skip adding static IP to network mk-embed-certs-758469 - found existing host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"}
	I0816 00:33:23.248654   78713 main.go:141] libmachine: (embed-certs-758469) Reserved static IP address: 192.168.39.185
	I0816 00:33:23.248673   78713 main.go:141] libmachine: (embed-certs-758469) Waiting for SSH to be available...
	I0816 00:33:23.248687   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Getting to WaitForSSH function...
	I0816 00:33:23.250607   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.250931   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.250965   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.251113   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH client type: external
	I0816 00:33:23.251141   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa (-rw-------)
	I0816 00:33:23.251179   78713 main.go:141] libmachine: (embed-certs-758469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:23.251196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | About to run SSH command:
	I0816 00:33:23.251211   78713 main.go:141] libmachine: (embed-certs-758469) DBG | exit 0
	I0816 00:33:23.373899   78713 main.go:141] libmachine: (embed-certs-758469) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:23.374270   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetConfigRaw
	I0816 00:33:23.374914   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.377034   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377343   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.377370   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377561   78713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/config.json ...
	I0816 00:33:23.377760   78713 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:23.377776   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:23.378014   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.379950   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380248   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.380277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380369   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.380524   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380668   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380795   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.380950   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.381134   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.381145   78713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:23.486074   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:23.486106   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486462   78713 buildroot.go:166] provisioning hostname "embed-certs-758469"
	I0816 00:33:23.486491   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486677   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.489520   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.489905   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.489924   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.490108   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.490279   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490427   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490566   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.490730   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.490901   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.490920   78713 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-758469 && echo "embed-certs-758469" | sudo tee /etc/hostname
	I0816 00:33:23.614635   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-758469
	
	I0816 00:33:23.614671   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.617308   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617673   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.617701   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617881   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.618087   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618351   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.618536   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.618721   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.618746   78713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-758469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-758469/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-758469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:23.734901   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:23.734931   78713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:23.734946   78713 buildroot.go:174] setting up certificates
	I0816 00:33:23.734953   78713 provision.go:84] configureAuth start
	I0816 00:33:23.734961   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.735255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.737952   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738312   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.738341   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738445   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.740589   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.740926   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.740953   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.741060   78713 provision.go:143] copyHostCerts
	I0816 00:33:23.741121   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:23.741138   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:23.741203   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:23.741357   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:23.741367   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:23.741393   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:23.741452   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:23.741458   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:23.741478   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:23.741525   78713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.embed-certs-758469 san=[127.0.0.1 192.168.39.185 embed-certs-758469 localhost minikube]
	I0816 00:33:23.871115   78713 provision.go:177] copyRemoteCerts
	I0816 00:33:23.871167   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:23.871190   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.874049   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874505   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.874538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874720   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.874913   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.875079   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.875210   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:23.959910   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:23.984454   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:33:24.009067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:24.036195   78713 provision.go:87] duration metric: took 301.229994ms to configureAuth
	I0816 00:33:24.036218   78713 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:24.036389   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:24.036453   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.039196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.039562   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039771   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.039970   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040125   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040224   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.040372   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.040584   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.040612   78713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:24.550693   78747 start.go:364] duration metric: took 4m44.527028624s to acquireMachinesLock for "default-k8s-diff-port-616827"
	I0816 00:33:24.550757   78747 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:24.550763   78747 fix.go:54] fixHost starting: 
	I0816 00:33:24.551164   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:24.551203   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:24.567741   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0816 00:33:24.568138   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:24.568674   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:33:24.568703   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:24.569017   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:24.569212   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:24.569385   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:33:24.570856   78747 fix.go:112] recreateIfNeeded on default-k8s-diff-port-616827: state=Stopped err=<nil>
	I0816 00:33:24.570901   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	W0816 00:33:24.571074   78747 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:24.572673   78747 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-616827" ...
	I0816 00:33:24.574220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Start
	I0816 00:33:24.574403   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring networks are active...
	I0816 00:33:24.575086   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network default is active
	I0816 00:33:24.575528   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network mk-default-k8s-diff-port-616827 is active
	I0816 00:33:24.576033   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Getting domain xml...
	I0816 00:33:24.576734   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Creating domain...
	I0816 00:33:24.314921   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:24.314951   78713 machine.go:96] duration metric: took 937.178488ms to provisionDockerMachine
	I0816 00:33:24.314964   78713 start.go:293] postStartSetup for "embed-certs-758469" (driver="kvm2")
	I0816 00:33:24.314974   78713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:24.315007   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.315405   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:24.315430   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.317962   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318242   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.318270   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318390   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.318588   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.318763   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.318900   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.400628   78713 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:24.405061   78713 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:24.405082   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:24.405148   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:24.405215   78713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:24.405302   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:24.414985   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:24.439646   78713 start.go:296] duration metric: took 124.668147ms for postStartSetup
	I0816 00:33:24.439692   78713 fix.go:56] duration metric: took 20.017583324s for fixHost
	I0816 00:33:24.439719   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.442551   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.442920   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.442954   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.443051   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.443257   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443434   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443567   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.443740   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.443912   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.443921   78713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:24.550562   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768404.525876526
	
	I0816 00:33:24.550588   78713 fix.go:216] guest clock: 1723768404.525876526
	I0816 00:33:24.550599   78713 fix.go:229] Guest: 2024-08-16 00:33:24.525876526 +0000 UTC Remote: 2024-08-16 00:33:24.439696953 +0000 UTC m=+285.318245053 (delta=86.179573ms)
	I0816 00:33:24.550618   78713 fix.go:200] guest clock delta is within tolerance: 86.179573ms
	I0816 00:33:24.550623   78713 start.go:83] releasing machines lock for "embed-certs-758469", held for 20.128541713s
	I0816 00:33:24.550647   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.551090   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:24.554013   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554358   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.554382   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554572   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555062   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555222   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555279   78713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:24.555330   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.555441   78713 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:24.555463   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.558216   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558368   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558567   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558719   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558723   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558742   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558925   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559074   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559205   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559285   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.559329   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.656926   78713 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:24.662590   78713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:24.811290   78713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:24.817486   78713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:24.817570   78713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:24.838317   78713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:24.838342   78713 start.go:495] detecting cgroup driver to use...
	I0816 00:33:24.838396   78713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:24.856294   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:24.875603   78713 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:24.875650   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:24.890144   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:24.904327   78713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:25.018130   78713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:25.149712   78713 docker.go:233] disabling docker service ...
	I0816 00:33:25.149795   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:25.165494   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:25.179554   78713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:25.330982   78713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:25.476436   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:25.493242   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:25.515688   78713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:25.515762   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.529924   78713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:25.529997   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.541412   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.551836   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.563356   78713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:25.574486   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.585533   78713 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.604169   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
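The sed invocations above point cri-o at the registry.k8s.io/pause:3.10 pause image, switch its cgroup manager to cgroupfs, and open unprivileged low ports, all by rewriting keys in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of the key rewrite those sed commands perform (helper name and sample input are made up for illustration):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setTOMLKey replaces any existing `key = ...` line with `key = "value"`,
	// mirroring the sed substitutions in the log.
	func setTOMLKey(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
	}

	func main() {
		conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n" // sample file contents
		conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
		conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(conf)
	}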
	I0816 00:33:25.615335   78713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:25.629366   78713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:25.629427   78713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:25.645937   78713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:25.657132   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:25.771891   78713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:25.914817   78713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:25.914904   78713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:25.919572   78713 start.go:563] Will wait 60s for crictl version
	I0816 00:33:25.919620   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:33:25.923419   78713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:25.969387   78713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:25.969484   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.002529   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.035709   78713 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:26.036921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:26.039638   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040001   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:26.040023   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040254   78713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:26.044444   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
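The two commands above make sure /etc/hosts maps host.minikube.internal to the network gateway, dropping any stale entry first. A small sketch of that idempotent rewrite operating on an in-memory string (in the log the same edit is performed remotely with grep/echo over SSH):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry removes any existing line for the given hostname and
	// appends a fresh "IP<tab>hostname" mapping.
	func ensureHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n"
		fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
	}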
	I0816 00:33:26.057172   78713 kubeadm.go:883] updating cluster {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:26.057326   78713 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:26.057382   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:26.093950   78713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:26.094031   78713 ssh_runner.go:195] Run: which lz4
	I0816 00:33:26.097998   78713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:26.102152   78713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:26.102183   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:27.538323   78713 crio.go:462] duration metric: took 1.440354469s to copy over tarball
	I0816 00:33:27.538400   78713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
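The preload step above checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached image tarball over when it does not, and unpacks it into /var so the container images do not have to be pulled. A simplified local sketch of the extract step, running the same tar invocation through os/exec (minikube actually drives this over SSH via its ssh_runner):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball not present: %w", err)
		}
		// Same flags as the logged command: keep security xattrs, decompress
		// with lz4, and unpack under /var.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}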
	I0816 00:33:25.885210   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting to get IP...
	I0816 00:33:25.886135   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886555   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:25.886538   80004 retry.go:31] will retry after 214.751664ms: waiting for machine to come up
	I0816 00:33:26.103182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103652   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103677   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.103603   80004 retry.go:31] will retry after 239.667632ms: waiting for machine to come up
	I0816 00:33:26.345223   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345776   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.345701   80004 retry.go:31] will retry after 474.740445ms: waiting for machine to come up
	I0816 00:33:26.822224   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822682   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822716   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.822639   80004 retry.go:31] will retry after 574.324493ms: waiting for machine to come up
	I0816 00:33:27.398433   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398939   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398971   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.398904   80004 retry.go:31] will retry after 567.388033ms: waiting for machine to come up
	I0816 00:33:27.967686   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968225   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.968093   80004 retry.go:31] will retry after 940.450394ms: waiting for machine to come up
	I0816 00:33:28.910549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911058   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911088   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:28.911031   80004 retry.go:31] will retry after 919.494645ms: waiting for machine to come up
	I0816 00:33:29.832687   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833204   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833244   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:29.833189   80004 retry.go:31] will retry after 1.332024716s: waiting for machine to come up
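The interleaved 78747 lines show the default-k8s-diff-port-616827 VM booting: each retry.go entry polls libvirt for a DHCP lease and sleeps a little longer before the next attempt. A generic sketch of that retry-with-growing-delay pattern; lookup here is a stand-in for the real libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP retries lookup with an increasing, jittered delay until it
	// succeeds or the deadline passes.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		delay := 200 * time.Millisecond
		start := time.Now()
		for time.Since(start) < deadline {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay / 2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
			time.Sleep(delay + jitter)
			delay = delay * 3 / 2 // back off a little more each attempt
		}
		return "", errors.New("timed out waiting for an IP")
	}

	func main() {
		attempts := 0
		lookup := func() (string, error) {
			if attempts++; attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.50.10", nil // placeholder address
		}
		fmt.Println(waitForIP(lookup, 30*time.Second))
	}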
	I0816 00:33:29.677224   78713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.138774475s)
	I0816 00:33:29.677252   78713 crio.go:469] duration metric: took 2.138901242s to extract the tarball
	I0816 00:33:29.677261   78713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:29.716438   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:29.768597   78713 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:29.768622   78713 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:33:29.768634   78713 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.0 crio true true} ...
	I0816 00:33:29.768787   78713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-758469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:29.768874   78713 ssh_runner.go:195] Run: crio config
	I0816 00:33:29.813584   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:29.813607   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:29.813620   78713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:29.813644   78713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-758469 NodeName:embed-certs-758469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:29.813776   78713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-758469"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:29.813862   78713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:29.825680   78713 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:29.825744   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:29.836314   78713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 00:33:29.853030   78713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:29.869368   78713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
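The kubeadm.yaml just staged on the node is rendered from the cluster options summarized in the kubeadm.go:181 line above (advertise address, pod and service CIDRs, cgroup driver, CRI socket, and so on). A toy text/template rendering of a fragment of that file, using a made-up options struct rather than minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmOpts is a hypothetical subset of the options fed into the config.
	type kubeadmOpts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
	}

	const tmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.BindPort}}\n" +
		"nodeRegistration:\n" +
		"  name: \"{{.NodeName}}\"\n" +
		"---\n" +
		"apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: ClusterConfiguration\n" +
		"networking:\n" +
		"  podSubnet: \"{{.PodSubnet}}\"\n" +
		"  serviceSubnet: {{.ServiceSubnet}}\n"

	func main() {
		opts := kubeadmOpts{
			AdvertiseAddress: "192.168.39.185",
			BindPort:         8443,
			NodeName:         "embed-certs-758469",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}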
	I0816 00:33:29.886814   78713 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:29.890644   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:29.903138   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:30.040503   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:30.058323   78713 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469 for IP: 192.168.39.185
	I0816 00:33:30.058351   78713 certs.go:194] generating shared ca certs ...
	I0816 00:33:30.058372   78713 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:30.058559   78713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:30.058624   78713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:30.058638   78713 certs.go:256] generating profile certs ...
	I0816 00:33:30.058778   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/client.key
	I0816 00:33:30.058873   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key.0d0e36ad
	I0816 00:33:30.058930   78713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key
	I0816 00:33:30.059101   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:30.059146   78713 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:30.059162   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:30.059197   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:30.059251   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:30.059285   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:30.059345   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:30.060202   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:30.098381   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:30.135142   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:30.175518   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:30.214349   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 00:33:30.249278   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:30.273772   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:30.298067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:30.324935   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:30.351149   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:30.375636   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:30.399250   78713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:30.417646   78713 ssh_runner.go:195] Run: openssl version
	I0816 00:33:30.423691   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:30.435254   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439651   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439700   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.445673   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:30.456779   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:30.467848   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472199   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472274   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.478109   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:30.489481   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:30.500747   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505116   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505162   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.510739   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:33:30.521829   78713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:30.526444   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:30.532373   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:30.538402   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:30.544697   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:30.550762   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:30.556573   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
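Each openssl x509 -checkend 86400 call above verifies that a control-plane certificate will still be valid 24 hours from now, so the restart does not proceed with credentials that are about to expire. The same check expressed with Go's crypto/x509 (the file path is just one of the certs named in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the certificate is still valid d from now,
	// which is what `openssl x509 -checkend 86400` asks with d = 24h.
	func validFor(certPEM []byte, d time.Duration) (bool, error) {
		block, _ := pem.Decode(certPEM)
		if block == nil {
			return false, errors.New("no PEM certificate found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(validFor(data, 24*time.Hour))
	}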
	I0816 00:33:30.562513   78713 kubeadm.go:392] StartCluster: {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:30.562602   78713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:30.562650   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.607119   78713 cri.go:89] found id: ""
	I0816 00:33:30.607197   78713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:30.617798   78713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:30.617818   78713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:30.617873   78713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:30.627988   78713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:30.628976   78713 kubeconfig.go:125] found "embed-certs-758469" server: "https://192.168.39.185:8443"
	I0816 00:33:30.631601   78713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:30.642001   78713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.185
	I0816 00:33:30.642036   78713 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:30.642047   78713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:30.642088   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.685946   78713 cri.go:89] found id: ""
	I0816 00:33:30.686049   78713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:30.704130   78713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:30.714467   78713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:30.714490   78713 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:30.714534   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:33:30.723924   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:30.723985   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:30.733804   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:33:30.743345   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:30.743412   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:30.753604   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.763271   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:30.763340   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.773121   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:33:30.782507   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:30.782565   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:30.792652   78713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:30.802523   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:30.923193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.206424   78713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.283195087s)
	I0816 00:33:32.206449   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.435275   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.509193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
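Because existing configuration files were found, minikube restarts the control plane by running individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of a full init. A condensed sketch of driving those phases with os/exec, with the binary and config paths taken from the log lines above:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.31.0/kubeadm"
		config := "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, phase := range phases {
			args := append(append([]string{}, phase...), "--config", config)
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Printf("phase %v failed: %v\n", phase, err)
				return
			}
		}
	}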
	I0816 00:33:32.590924   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:32.591020   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.091804   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.591198   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.607568   78713 api_server.go:72] duration metric: took 1.016656713s to wait for apiserver process to appear ...
	I0816 00:33:33.607596   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:33.607619   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
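api_server.go now polls the /healthz endpoint until the API server answers 200; the 403 and 500 responses that follow are expected intermediate states while the apiserver is still bootstrapping RBAC roles and system priority classes. A condensed sketch of such a poll loop (TLS verification is skipped here only because this probe cares solely about reachability and status code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The probe does not trust the apiserver's self-signed cert;
				// only the HTTP status matters for this check.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.185:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}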
	I0816 00:33:31.166506   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166900   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166927   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:31.166860   80004 retry.go:31] will retry after 1.213971674s: waiting for machine to come up
	I0816 00:33:32.382376   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382862   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382889   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:32.382821   80004 retry.go:31] will retry after 2.115615681s: waiting for machine to come up
	I0816 00:33:34.501236   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501697   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:34.501646   80004 retry.go:31] will retry after 2.495252025s: waiting for machine to come up
	I0816 00:33:36.334341   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.334374   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.334389   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.351971   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.352011   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.608364   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.614582   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:36.614619   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.107654   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.113352   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.113384   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.607902   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.614677   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.614710   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.108329   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.112493   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.112521   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.608061   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.613134   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.613172   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.107667   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.111920   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:39.111954   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.608190   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.613818   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:33:39.619467   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:39.619490   78713 api_server.go:131] duration metric: took 6.011887872s to wait for apiserver health ...
	I0816 00:33:39.619499   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:39.619504   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:39.621572   78713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
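Editor's note: the 78713 run above polls https://192.168.39.185:8443/healthz roughly every 500ms until the apiserver answers 200 instead of 500. A minimal sketch of that kind of polling loop, assuming an ad-hoc HTTPS client that skips certificate verification; the function name and timings are illustrative and not minikube's api_server.go API.

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline passes. A 500 response carries the per-poststarthook breakdown
// seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// ad-hoc check: no CA bundle, so skip verification (assumption)
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is simply "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence in the log
	}
	return errors.New("apiserver did not become healthy in time")
}

func main() {
	if err := waitForHealthz("https://192.168.39.185:8443/healthz", 6*time.Second); err != nil {
		fmt.Println(err)
	}
}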
	I0816 00:33:36.999158   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999616   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999645   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:36.999576   80004 retry.go:31] will retry after 2.736710806s: waiting for machine to come up
	I0816 00:33:39.737818   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738286   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738320   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:39.738215   80004 retry.go:31] will retry after 3.3205645s: waiting for machine to come up
	I0816 00:33:39.623254   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:39.633910   78713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
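Editor's note: the log only records that a 496-byte file was copied to /etc/cni/net.d/1-k8s.conflist; its contents are not shown. Purely as an illustration of what a bridge CNI conflist of that sort typically contains, written the way the ssh_runner step would place it (subnet, plugin options and file mode below are assumptions, not taken from this run):

package main

import (
	"fmt"
	"os"
)

// Illustrative bridge CNI config; not the literal 496-byte file from the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
	}
}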
	I0816 00:33:39.653736   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:39.663942   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:39.663983   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:39.663994   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:39.664044   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:39.664060   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:39.664067   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:33:39.664078   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:39.664089   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:39.664107   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:33:39.664118   78713 system_pods.go:74] duration metric: took 10.358906ms to wait for pod list to return data ...
	I0816 00:33:39.664127   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:39.667639   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:39.667669   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:39.667682   78713 node_conditions.go:105] duration metric: took 3.547018ms to run NodePressure ...
	I0816 00:33:39.667701   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:39.929620   78713 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934264   78713 kubeadm.go:739] kubelet initialised
	I0816 00:33:39.934289   78713 kubeadm.go:740] duration metric: took 4.64037ms waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934299   78713 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:39.938771   78713 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.943735   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943760   78713 pod_ready.go:82] duration metric: took 4.962601ms for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.943772   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943781   78713 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.947900   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947925   78713 pod_ready.go:82] duration metric: took 4.129605ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.947936   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947943   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.953367   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953400   78713 pod_ready.go:82] duration metric: took 5.445682ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.953412   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953422   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.057510   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057533   78713 pod_ready.go:82] duration metric: took 104.099944ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.057543   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057548   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.458355   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458389   78713 pod_ready.go:82] duration metric: took 400.832009ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.458400   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458408   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.857939   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857964   78713 pod_ready.go:82] duration metric: took 399.549123ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.857974   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857980   78713 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:41.257101   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257126   78713 pod_ready.go:82] duration metric: took 399.13078ms for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:41.257135   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257142   78713 pod_ready.go:39] duration metric: took 1.322827054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
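Editor's note: the pod_ready loop above skips each system-critical pod because the node itself still reports Ready=False. A minimal client-go sketch of the same kind of per-pod Ready check, assuming the kubeconfig path written earlier in the log; the helper name is illustrative, not minikube's pod_ready.go API.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named kube-system pod has the Ready condition set to True.
func podIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19452-12919/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(cs, "coredns-6f6b679f8f-54gqb")
	fmt.Println(ready, err)
}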
	I0816 00:33:41.257159   78713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:33:41.269076   78713 ops.go:34] apiserver oom_adj: -16
	I0816 00:33:41.269098   78713 kubeadm.go:597] duration metric: took 10.651273415s to restartPrimaryControlPlane
	I0816 00:33:41.269107   78713 kubeadm.go:394] duration metric: took 10.706599955s to StartCluster
	I0816 00:33:41.269127   78713 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.269191   78713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:33:41.271380   78713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.271679   78713 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:33:41.271714   78713 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:33:41.271812   78713 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-758469"
	I0816 00:33:41.271834   78713 addons.go:69] Setting default-storageclass=true in profile "embed-certs-758469"
	I0816 00:33:41.271845   78713 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-758469"
	W0816 00:33:41.271858   78713 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:33:41.271874   78713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-758469"
	I0816 00:33:41.271882   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:41.271891   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.271860   78713 addons.go:69] Setting metrics-server=true in profile "embed-certs-758469"
	I0816 00:33:41.271934   78713 addons.go:234] Setting addon metrics-server=true in "embed-certs-758469"
	W0816 00:33:41.271952   78713 addons.go:243] addon metrics-server should already be in state true
	I0816 00:33:41.272022   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.272324   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272575   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272604   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272704   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272718   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272745   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.274599   78713 out.go:177] * Verifying Kubernetes components...
	I0816 00:33:41.276283   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:41.292526   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0816 00:33:41.292560   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0816 00:33:41.292556   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43083
	I0816 00:33:41.293000   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293053   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293004   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293482   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293499   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293592   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293606   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293625   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293607   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293891   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293939   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293976   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.294132   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.294475   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294483   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294517   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.294522   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.297714   78713 addons.go:234] Setting addon default-storageclass=true in "embed-certs-758469"
	W0816 00:33:41.297747   78713 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:33:41.297787   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.298192   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.298238   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.310002   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0816 00:33:41.310000   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0816 00:33:41.310469   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310521   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310899   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.310917   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311027   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.311048   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311293   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311476   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.311491   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311642   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.313614   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.313697   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.315474   78713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:33:41.315484   78713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:33:41.316719   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0816 00:33:41.316887   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:33:41.316902   78713 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:33:41.316921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.316975   78713 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.316985   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:33:41.316995   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.317061   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.317572   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.317594   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.317941   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.318669   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.318702   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.320288   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320668   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.320695   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320726   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320939   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321241   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.321267   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.321402   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321497   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.321547   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321592   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.321883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.322021   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.334230   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0816 00:33:41.334580   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.335088   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.335107   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.335387   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.335549   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.336891   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.337084   78713 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.337100   78713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:33:41.337115   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.340204   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340667   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.340697   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340837   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.340987   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.341120   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.341277   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.476131   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:41.502242   78713 node_ready.go:35] waiting up to 6m0s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:41.559562   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.575913   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:33:41.575937   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:33:41.614763   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:33:41.614784   78713 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:33:41.628658   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.670367   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:41.670393   78713 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:33:41.746638   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:42.849125   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.22043382s)
	I0816 00:33:42.849189   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849202   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849397   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.289807606s)
	I0816 00:33:42.849438   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849448   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849478   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849514   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849524   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849538   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849550   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849761   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849803   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849813   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849825   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849833   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.850018   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850033   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.850059   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850059   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.850078   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856398   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.856419   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.856647   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.856667   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856676   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901261   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1545817s)
	I0816 00:33:42.901314   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901329   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901619   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901680   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901694   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901704   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901713   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901953   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901973   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901986   78713 addons.go:475] Verifying addon metrics-server=true in "embed-certs-758469"
	I0816 00:33:42.904677   78713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 00:33:42.905802   78713 addons.go:510] duration metric: took 1.634089536s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 00:33:43.506584   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:44.254575   79191 start.go:364] duration metric: took 3m52.362627542s to acquireMachinesLock for "old-k8s-version-098619"
	I0816 00:33:44.254648   79191 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:44.254659   79191 fix.go:54] fixHost starting: 
	I0816 00:33:44.255099   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:44.255137   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:44.271236   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0816 00:33:44.271591   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:44.272030   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:33:44.272052   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:44.272328   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:44.272503   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:33:44.272660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetState
	I0816 00:33:44.274235   79191 fix.go:112] recreateIfNeeded on old-k8s-version-098619: state=Stopped err=<nil>
	I0816 00:33:44.274272   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	W0816 00:33:44.274415   79191 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:44.275978   79191 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-098619" ...
	I0816 00:33:43.059949   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060413   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Found IP for machine: 192.168.50.128
	I0816 00:33:43.060440   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserving static IP address...
	I0816 00:33:43.060479   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has current primary IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060881   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.060906   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | skip adding static IP to network mk-default-k8s-diff-port-616827 - found existing host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"}
	I0816 00:33:43.060921   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserved static IP address: 192.168.50.128
	I0816 00:33:43.060937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for SSH to be available...
	I0816 00:33:43.060952   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Getting to WaitForSSH function...
	I0816 00:33:43.063249   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063552   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.063592   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH client type: external
	I0816 00:33:43.063833   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa (-rw-------)
	I0816 00:33:43.063877   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:43.063896   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | About to run SSH command:
	I0816 00:33:43.063905   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | exit 0
	I0816 00:33:43.185986   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:43.186338   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetConfigRaw
	I0816 00:33:43.186944   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.189324   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189617   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.189643   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189890   78747 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/config.json ...
	I0816 00:33:43.190166   78747 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:43.190192   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:43.190401   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.192515   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192836   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.192865   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192940   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.193118   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193280   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193454   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.193614   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.193812   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.193825   78747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:43.290143   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:43.290168   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290395   78747 buildroot.go:166] provisioning hostname "default-k8s-diff-port-616827"
	I0816 00:33:43.290422   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290603   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.293231   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.293665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293829   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.294038   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294195   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294325   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.294479   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.294685   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.294703   78747 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-616827 && echo "default-k8s-diff-port-616827" | sudo tee /etc/hostname
	I0816 00:33:43.406631   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-616827
	
	I0816 00:33:43.406655   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.409271   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409610   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.409641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409794   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.409984   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410160   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.410491   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.410670   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.410695   78747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-616827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-616827/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-616827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:43.515766   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
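Editor's note: provisionDockerMachine above sets the guest hostname by running shell commands over SSH with the machine's id_rsa key. A minimal sketch of running one such command with golang.org/x/crypto/ssh, reusing the key path and address from the log; this is not minikube's sshutil implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.50.128:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}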
	I0816 00:33:43.515796   78747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:43.515829   78747 buildroot.go:174] setting up certificates
	I0816 00:33:43.515841   78747 provision.go:84] configureAuth start
	I0816 00:33:43.515850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.516128   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.518730   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519055   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.519087   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.521186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.521538   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521691   78747 provision.go:143] copyHostCerts
	I0816 00:33:43.521746   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:43.521764   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:43.521822   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:43.521949   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:43.521959   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:43.521982   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:43.522050   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:43.522057   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:43.522074   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:43.522132   78747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-616827 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-616827 localhost minikube]
	I0816 00:33:43.601126   78747 provision.go:177] copyRemoteCerts
	I0816 00:33:43.601179   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:43.601203   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.603816   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604148   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.604180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.604549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.604725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.604863   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:43.686829   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:43.712297   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 00:33:43.738057   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:43.762820   78747 provision.go:87] duration metric: took 246.967064ms to configureAuth
	I0816 00:33:43.762853   78747 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:43.763069   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:43.763155   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.765886   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766256   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.766287   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766447   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.766641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766813   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.767164   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.767318   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.767334   78747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:44.025337   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:44.025373   78747 machine.go:96] duration metric: took 835.190539ms to provisionDockerMachine
	I0816 00:33:44.025387   78747 start.go:293] postStartSetup for "default-k8s-diff-port-616827" (driver="kvm2")
	I0816 00:33:44.025401   78747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:44.025416   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.025780   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:44.025804   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.028307   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028591   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.028618   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028740   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.028925   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.029117   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.029281   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.109481   78747 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:44.115290   78747 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:44.115317   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:44.115388   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:44.115482   78747 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:44.115597   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:44.128677   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:44.154643   78747 start.go:296] duration metric: took 129.242138ms for postStartSetup
	I0816 00:33:44.154685   78747 fix.go:56] duration metric: took 19.603921801s for fixHost
	I0816 00:33:44.154705   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.157477   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.157907   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.157937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.158051   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.158264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158411   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158580   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.158757   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:44.158981   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:44.158996   78747 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:44.254419   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768424.226223949
	
	I0816 00:33:44.254443   78747 fix.go:216] guest clock: 1723768424.226223949
	I0816 00:33:44.254452   78747 fix.go:229] Guest: 2024-08-16 00:33:44.226223949 +0000 UTC Remote: 2024-08-16 00:33:44.154688835 +0000 UTC m=+304.265683075 (delta=71.535114ms)
	I0816 00:33:44.254476   78747 fix.go:200] guest clock delta is within tolerance: 71.535114ms
	I0816 00:33:44.254482   78747 start.go:83] releasing machines lock for "default-k8s-diff-port-616827", held for 19.703745588s
	I0816 00:33:44.254504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.254750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:44.257516   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.257879   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.257910   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.258111   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258828   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258908   78747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:44.258946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.259033   78747 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:44.259048   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.261566   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261814   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261978   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262008   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262112   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262145   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262254   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262390   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262442   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262502   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.262549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262642   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.346934   78747 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:44.370413   78747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:44.519130   78747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:44.525276   78747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:44.525344   78747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:44.549125   78747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:44.549154   78747 start.go:495] detecting cgroup driver to use...
	I0816 00:33:44.549227   78747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:44.575221   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:44.592214   78747 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:44.592270   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:44.607403   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:44.629127   78747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:44.786185   78747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:44.954426   78747 docker.go:233] disabling docker service ...
	I0816 00:33:44.954495   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:44.975169   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:44.994113   78747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:45.142572   78747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:45.297255   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:45.313401   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:45.334780   78747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:45.334851   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.346039   78747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:45.346111   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.357681   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.368607   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.381164   78747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:45.394060   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.406010   78747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.424720   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.437372   78747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:45.450515   78747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:45.450595   78747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:45.465740   78747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:45.476568   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:45.629000   78747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:45.781044   78747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:45.781142   78747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:45.787480   78747 start.go:563] Will wait 60s for crictl version
	I0816 00:33:45.787551   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:33:45.791907   78747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:45.836939   78747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:45.837025   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.869365   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.907162   78747 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:44.277288   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .Start
	I0816 00:33:44.277426   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring networks are active...
	I0816 00:33:44.278141   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network default is active
	I0816 00:33:44.278471   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network mk-old-k8s-version-098619 is active
	I0816 00:33:44.278820   79191 main.go:141] libmachine: (old-k8s-version-098619) Getting domain xml...
	I0816 00:33:44.279523   79191 main.go:141] libmachine: (old-k8s-version-098619) Creating domain...
	I0816 00:33:45.643704   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting to get IP...
	I0816 00:33:45.644691   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.645213   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.645247   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.645162   80212 retry.go:31] will retry after 198.057532ms: waiting for machine to come up
	I0816 00:33:45.844756   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.845297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.845321   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.845247   80212 retry.go:31] will retry after 288.630433ms: waiting for machine to come up
	I0816 00:33:46.135913   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.136413   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.136442   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.136365   80212 retry.go:31] will retry after 456.48021ms: waiting for machine to come up
	I0816 00:33:46.594170   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.594649   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.594678   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.594592   80212 retry.go:31] will retry after 501.49137ms: waiting for machine to come up
	I0816 00:33:46.006040   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:47.007144   78713 node_ready.go:49] node "embed-certs-758469" has status "Ready":"True"
	I0816 00:33:47.007172   78713 node_ready.go:38] duration metric: took 5.504897396s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:47.007183   78713 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:47.014800   78713 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:49.022567   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:45.908518   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:45.912248   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.912762   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:45.912797   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.913115   78747 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:45.917917   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:45.935113   78747 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:45.935294   78747 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:45.935351   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:45.988031   78747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:45.988115   78747 ssh_runner.go:195] Run: which lz4
	I0816 00:33:45.992508   78747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:45.997108   78747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:45.997199   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:47.459404   78747 crio.go:462] duration metric: took 1.466928999s to copy over tarball
	I0816 00:33:47.459478   78747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:49.621449   78747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16194292s)
	I0816 00:33:49.621484   78747 crio.go:469] duration metric: took 2.162054092s to extract the tarball
	I0816 00:33:49.621494   78747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:49.660378   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:49.709446   78747 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:49.709471   78747 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:33:49.709481   78747 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.0 crio true true} ...
	I0816 00:33:49.709609   78747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-616827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:49.709704   78747 ssh_runner.go:195] Run: crio config
	I0816 00:33:49.756470   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:49.756497   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:49.756510   78747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:49.756534   78747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-616827 NodeName:default-k8s-diff-port-616827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:49.756745   78747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-616827"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:49.756827   78747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:49.766769   78747 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:49.766840   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:49.776367   78747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 00:33:49.793191   78747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:49.811993   78747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 00:33:49.829787   78747 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:49.833673   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:49.846246   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:47.098130   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.098614   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.098645   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.098569   80212 retry.go:31] will retry after 663.568587ms: waiting for machine to come up
	I0816 00:33:47.763930   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.764447   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.764470   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.764376   80212 retry.go:31] will retry after 679.581678ms: waiting for machine to come up
	I0816 00:33:48.446082   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:48.446552   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:48.446579   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:48.446498   80212 retry.go:31] will retry after 1.090430732s: waiting for machine to come up
	I0816 00:33:49.538961   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:49.539454   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:49.539482   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:49.539397   80212 retry.go:31] will retry after 1.039148258s: waiting for machine to come up
	I0816 00:33:50.579642   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:50.580119   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:50.580144   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:50.580074   80212 retry.go:31] will retry after 1.440992413s: waiting for machine to come up
	I0816 00:33:51.788858   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:54.022577   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:49.963020   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:49.980142   78747 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827 for IP: 192.168.50.128
	I0816 00:33:49.980170   78747 certs.go:194] generating shared ca certs ...
	I0816 00:33:49.980192   78747 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:49.980408   78747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:49.980470   78747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:49.980489   78747 certs.go:256] generating profile certs ...
	I0816 00:33:49.980583   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/client.key
	I0816 00:33:49.980669   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key.2062a467
	I0816 00:33:49.980737   78747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key
	I0816 00:33:49.980891   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:49.980940   78747 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:49.980949   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:49.980984   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:49.981021   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:49.981050   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:49.981102   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:49.981835   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:50.014530   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:50.057377   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:50.085730   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:50.121721   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 00:33:50.166448   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:50.195059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:50.220059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:50.244288   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:50.268463   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:50.293203   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:50.318859   78747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:50.336625   78747 ssh_runner.go:195] Run: openssl version
	I0816 00:33:50.343301   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:50.355408   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360245   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360312   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.366435   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:50.377753   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:50.389482   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394337   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394419   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.400279   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:33:50.412410   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:50.424279   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429013   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429077   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.435095   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:50.448148   78747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:50.453251   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:50.459730   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:50.466145   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:50.472438   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:50.478701   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:50.485081   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 00:33:50.490958   78747 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:50.491091   78747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:50.491173   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.545458   78747 cri.go:89] found id: ""
	I0816 00:33:50.545532   78747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:50.557054   78747 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:50.557074   78747 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:50.557122   78747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:50.570313   78747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:50.571774   78747 kubeconfig.go:125] found "default-k8s-diff-port-616827" server: "https://192.168.50.128:8444"
	I0816 00:33:50.574969   78747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:50.586066   78747 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I0816 00:33:50.586101   78747 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:50.586114   78747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:50.586172   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.631347   78747 cri.go:89] found id: ""
	I0816 00:33:50.631416   78747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:50.651296   78747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:50.665358   78747 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:50.665387   78747 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:50.665427   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 00:33:50.678634   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:50.678706   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:50.690376   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 00:33:50.702070   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:50.702132   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:50.714117   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.725349   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:50.725413   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.735691   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 00:33:50.745524   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:50.745598   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:50.756310   78747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:50.771825   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:50.908593   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.046812   78747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138178717s)
	I0816 00:33:52.046863   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.282111   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.357877   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
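The five kubeadm init phase runs above (certs, kubeconfig, kubelet-start, control-plane, etcd) are how the restart path rebuilds an existing control plane without a full kubeadm init. Condensed into a shell sketch, with the binary directory and config path taken from the log lines above:

	# phase sequence from the log, condensed; paths as shown above
	KPATH="/var/lib/minikube/binaries/v1.31.0:$PATH"
	CFG=/var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$KPATH" kubeadm init phase certs all          --config "$CFG"
	sudo env PATH="$KPATH" kubeadm init phase kubeconfig all     --config "$CFG"
	sudo env PATH="$KPATH" kubeadm init phase kubelet-start      --config "$CFG"
	sudo env PATH="$KPATH" kubeadm init phase control-plane all  --config "$CFG"
	sudo env PATH="$KPATH" kubeadm init phase etcd local         --config "$CFG"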
	I0816 00:33:52.485435   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:52.485531   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:52.985717   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.486461   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.522663   78747 api_server.go:72] duration metric: took 1.037234176s to wait for apiserver process to appear ...
	I0816 00:33:53.522692   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:53.522713   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:52.022573   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:52.023319   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:52.023352   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:52.023226   80212 retry.go:31] will retry after 1.814668747s: waiting for machine to come up
	I0816 00:33:53.839539   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:53.839916   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:53.839944   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:53.839861   80212 retry.go:31] will retry after 1.900379439s: waiting for machine to come up
	I0816 00:33:55.742480   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:55.742981   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:55.743004   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:55.742920   80212 retry.go:31] will retry after 2.798728298s: waiting for machine to come up
	I0816 00:33:56.782681   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.782714   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:56.782730   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:56.828595   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.828628   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:57.022870   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.028291   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.028326   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:57.522858   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.533079   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.533120   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.023304   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.029913   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:58.029948   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.523517   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.529934   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:33:58.536872   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:58.536898   78747 api_server.go:131] duration metric: took 5.014199256s to wait for apiserver health ...
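The healthz probes above go through the usual startup sequence for a restarted apiserver: 403 while anonymous requests are still rejected, 500 while the rbac/bootstrap-roles and scheduling poststarthooks still report failed, then 200 once every hook completes. A minimal shell sketch of the same unauthenticated probe (address and port from the log; -k because, like minikube's check, it presents no client certificate):

	# poll the apiserver healthz endpoint until it reports "ok"
	until curl -sk https://192.168.50.128:8444/healthz | grep -qx ok; do
	  echo "healthz not ready yet, retrying..."
	  sleep 0.5
	done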
	I0816 00:33:58.536907   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:58.536916   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:58.539004   78747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:54.522157   78713 pod_ready.go:93] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.522186   78713 pod_ready.go:82] duration metric: took 7.507358513s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.522201   78713 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529305   78713 pod_ready.go:93] pod "etcd-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.529323   78713 pod_ready.go:82] duration metric: took 7.114484ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529331   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536656   78713 pod_ready.go:93] pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.536688   78713 pod_ready.go:82] duration metric: took 7.349231ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536701   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542615   78713 pod_ready.go:93] pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.542637   78713 pod_ready.go:82] duration metric: took 5.927403ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542650   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548165   78713 pod_ready.go:93] pod "kube-proxy-4xc89" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.548188   78713 pod_ready.go:82] duration metric: took 5.530073ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548200   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919561   78713 pod_ready.go:93] pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.919586   78713 pod_ready.go:82] duration metric: took 371.377774ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919598   78713 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:56.925892   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.926811   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.540592   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:58.554493   78747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
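Configuring the bridge CNI amounts to dropping a single conflist into /etc/cni/net.d (496 bytes here). The exact file contents are not shown in the log; purely as an illustration, a typical bridge-plus-portmap conflist of roughly that size looks like the sketch below. The subnet and plugin options are assumptions, not the real file:

	# illustrative only: representative bridge CNI conflist, NOT the byte-for-byte file from the log
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF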
	I0816 00:33:58.594341   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:58.605247   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:58.605293   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:58.605304   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:58.605314   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:58.605329   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:58.605342   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:33:58.605351   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:58.605358   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:58.605363   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:33:58.605372   78747 system_pods.go:74] duration metric: took 11.009517ms to wait for pod list to return data ...
	I0816 00:33:58.605384   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:58.609964   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:58.609996   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:58.610007   78747 node_conditions.go:105] duration metric: took 4.615471ms to run NodePressure ...
	I0816 00:33:58.610025   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:58.930292   78747 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937469   78747 kubeadm.go:739] kubelet initialised
	I0816 00:33:58.937499   78747 kubeadm.go:740] duration metric: took 7.181814ms waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937509   78747 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:59.036968   78747 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.046554   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046589   78747 pod_ready.go:82] duration metric: took 9.589918ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.046601   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046618   78747 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.053621   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053654   78747 pod_ready.go:82] duration metric: took 7.022323ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.053669   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053678   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.065329   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065357   78747 pod_ready.go:82] duration metric: took 11.650757ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.065378   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065387   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.074595   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074627   78747 pod_ready.go:82] duration metric: took 9.230183ms for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.074643   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074657   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.399077   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399105   78747 pod_ready.go:82] duration metric: took 324.440722ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.399116   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399124   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.797130   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797158   78747 pod_ready.go:82] duration metric: took 398.024149ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.797169   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797176   78747 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:00.197929   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197961   78747 pod_ready.go:82] duration metric: took 400.777243ms for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:34:00.197976   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197992   78747 pod_ready.go:39] duration metric: took 1.260464876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
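Each pod above is skipped because the node itself still reports Ready=False, so the extra wait falls through quickly. The same readiness checks can be reproduced by hand with kubectl; the sketch below assumes the kubeconfig context carries the profile name, as it does elsewhere in this report:

	# check node readiness, then wait for the CoreDNS pods the same way the test does
	kubectl --context default-k8s-diff-port-616827 get node default-k8s-diff-port-616827 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	kubectl --context default-k8s-diff-port-616827 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m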
	I0816 00:34:00.198024   78747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:34:00.210255   78747 ops.go:34] apiserver oom_adj: -16
	I0816 00:34:00.210278   78747 kubeadm.go:597] duration metric: took 9.653197586s to restartPrimaryControlPlane
	I0816 00:34:00.210302   78747 kubeadm.go:394] duration metric: took 9.719364617s to StartCluster
	I0816 00:34:00.210322   78747 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.210405   78747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:00.212730   78747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.213053   78747 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:34:00.213162   78747 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:34:00.213247   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:00.213277   78747 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213292   78747 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213305   78747 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213313   78747 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:34:00.213344   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213352   78747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-616827"
	I0816 00:34:00.213298   78747 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213413   78747 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213435   78747 addons.go:243] addon metrics-server should already be in state true
	I0816 00:34:00.213463   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213751   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213795   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213752   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213886   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213756   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213992   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.215058   78747 out.go:177] * Verifying Kubernetes components...
	I0816 00:34:00.216719   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:00.229428   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0816 00:34:00.229676   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0816 00:34:00.229881   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230164   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230522   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230538   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230689   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230727   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230850   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.231488   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.231512   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.231754   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.232394   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.232426   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.232909   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0816 00:34:00.233400   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.233959   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.233979   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.234368   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.234576   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.238180   78747 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.238203   78747 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:34:00.238230   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.238598   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.238642   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.249682   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0816 00:34:00.250163   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.250894   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.250919   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.251326   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0816 00:34:00.251324   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.251663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.251828   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.252294   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.252318   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.252863   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.253070   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.253746   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.254958   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.255056   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0816 00:34:00.255513   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.256043   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.256083   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.256121   78747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:00.256494   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.257255   78747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:34:00.257377   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.257422   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.259132   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:34:00.259154   78747 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:34:00.259176   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.259204   78747 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.259223   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:34:00.259241   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.263096   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263213   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263688   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263874   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263996   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264175   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264441   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.264511   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264695   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.274557   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0816 00:34:00.274984   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.275444   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.275463   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.275735   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.275946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.277509   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.277745   78747 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.277762   78747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:34:00.277782   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.280264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280660   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.280689   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280790   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.280982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.281140   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.281286   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.445986   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:00.465112   78747 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:00.568927   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.602693   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.620335   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:34:00.620355   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:34:00.667790   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:34:00.667810   78747 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:34:00.698510   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.698536   78747 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:34:00.723319   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
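The addon manifests are staged under /etc/kubernetes/addons/ inside the guest and applied with the pinned kubectl, as the Run line above shows. An equivalent manual invocation over minikube ssh (profile name and paths as in the log) would be roughly:

	# apply the staged metrics-server manifests with the in-VM kubectl
	minikube -p default-k8s-diff-port-616827 ssh -- sudo \
	  KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.31.0/kubectl apply \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	  -f /etc/kubernetes/addons/metrics-server-service.yaml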
	I0816 00:34:00.975635   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.975663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976006   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976007   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976030   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.976044   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.976075   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976347   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976340   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976376   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.983280   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.983304   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.983587   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.983586   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.983620   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.678707   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678733   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.678889   78747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.076166351s)
	I0816 00:34:01.678936   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678955   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679115   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679136   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679145   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679153   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679473   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679497   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679484   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679514   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679521   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679525   78747 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-616827"
	I0816 00:34:01.679528   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679537   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679544   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679821   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679862   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679887   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.683006   78747 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 00:33:58.543282   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:58.543753   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:58.543783   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:58.543689   80212 retry.go:31] will retry after 4.402812235s: waiting for machine to come up
	I0816 00:34:00.927244   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:03.428032   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:04.178649   78489 start.go:364] duration metric: took 54.753990439s to acquireMachinesLock for "no-preload-819398"
	I0816 00:34:04.178706   78489 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:34:04.178714   78489 fix.go:54] fixHost starting: 
	I0816 00:34:04.179124   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:04.179162   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:04.195783   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0816 00:34:04.196138   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:04.196590   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:34:04.196614   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:04.196962   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:04.197161   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:04.197303   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:34:04.198795   78489 fix.go:112] recreateIfNeeded on no-preload-819398: state=Stopped err=<nil>
	I0816 00:34:04.198814   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	W0816 00:34:04.198978   78489 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:34:04.200736   78489 out.go:177] * Restarting existing kvm2 VM for "no-preload-819398" ...
	I0816 00:34:01.684641   78747 addons.go:510] duration metric: took 1.471480873s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 00:34:02.473603   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:04.476035   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:02.951078   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951631   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has current primary IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951672   79191 main.go:141] libmachine: (old-k8s-version-098619) Found IP for machine: 192.168.72.137
	I0816 00:34:02.951687   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserving static IP address...
	I0816 00:34:02.952154   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserved static IP address: 192.168.72.137
	I0816 00:34:02.952186   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.952201   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting for SSH to be available...
	I0816 00:34:02.952224   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | skip adding static IP to network mk-old-k8s-version-098619 - found existing host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"}
	I0816 00:34:02.952236   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Getting to WaitForSSH function...
	I0816 00:34:02.954361   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954686   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.954715   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954791   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH client type: external
	I0816 00:34:02.954830   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa (-rw-------)
	I0816 00:34:02.954871   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:02.954890   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | About to run SSH command:
	I0816 00:34:02.954909   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | exit 0
	I0816 00:34:03.078035   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | SSH cmd err, output: <nil>: 
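WaitForSSH shells out to the system ssh client with host-key checking disabled and retries until "exit 0" succeeds on the guest. Reassembled from the option list in the log (some of the quieter options omitted), the probe is roughly:

	# the external ssh probe libmachine runs until it exits 0
	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o PasswordAuthentication=no -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa \
	    -p 22 docker@192.168.72.137 'exit 0'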
	I0816 00:34:03.078408   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:34:03.079002   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.081041   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081391   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.081489   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081566   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:34:03.081748   79191 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:03.081767   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.082007   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.084022   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084333   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.084357   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084499   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.084700   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.084867   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.085074   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.085266   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.085509   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.085525   79191 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:03.186066   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:03.186094   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186368   79191 buildroot.go:166] provisioning hostname "old-k8s-version-098619"
	I0816 00:34:03.186397   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186597   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.189330   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189658   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.189702   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189792   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.190004   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190185   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190344   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.190481   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.190665   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.190688   79191 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098619 && echo "old-k8s-version-098619" | sudo tee /etc/hostname
	I0816 00:34:03.304585   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098619
	
	I0816 00:34:03.304608   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.307415   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307732   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.307763   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307955   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.308155   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308314   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308474   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.308629   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.308795   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.308811   79191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098619/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:03.418968   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:03.419010   79191 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:03.419045   79191 buildroot.go:174] setting up certificates
	I0816 00:34:03.419058   79191 provision.go:84] configureAuth start
	I0816 00:34:03.419072   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.419338   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.421799   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422159   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.422198   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422401   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.425023   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425417   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.425445   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425557   79191 provision.go:143] copyHostCerts
	I0816 00:34:03.425624   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:03.425646   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:03.425717   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:03.425875   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:03.425888   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:03.425921   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:03.426007   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:03.426017   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:03.426045   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:03.426112   79191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098619 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-098619]
	I0816 00:34:03.509869   79191 provision.go:177] copyRemoteCerts
	I0816 00:34:03.509932   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:03.509961   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.512603   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.512938   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.512984   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.513163   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.513451   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.513617   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.513777   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:03.596330   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 00:34:03.621969   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:03.646778   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:03.671937   79191 provision.go:87] duration metric: took 252.867793ms to configureAuth
	I0816 00:34:03.671964   79191 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:03.672149   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:34:03.672250   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.675207   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675600   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.675625   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675787   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.676006   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676199   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676360   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.676549   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.676762   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.676779   79191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:03.945259   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:03.945287   79191 machine.go:96] duration metric: took 863.526642ms to provisionDockerMachine
	I0816 00:34:03.945298   79191 start.go:293] postStartSetup for "old-k8s-version-098619" (driver="kvm2")
	I0816 00:34:03.945308   79191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:03.945335   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.945638   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:03.945666   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.948590   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.948967   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.948989   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.949152   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.949350   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.949491   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.949645   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.028994   79191 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:04.033776   79191 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:04.033799   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:04.033872   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:04.033943   79191 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:04.034033   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:04.045492   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:04.071879   79191 start.go:296] duration metric: took 126.569157ms for postStartSetup
	I0816 00:34:04.071920   79191 fix.go:56] duration metric: took 19.817260263s for fixHost
	I0816 00:34:04.071944   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.074942   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.075325   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075504   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.075699   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075846   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075977   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.076146   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:04.076319   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:04.076332   79191 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:04.178483   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768444.133390375
	
	I0816 00:34:04.178510   79191 fix.go:216] guest clock: 1723768444.133390375
	I0816 00:34:04.178519   79191 fix.go:229] Guest: 2024-08-16 00:34:04.133390375 +0000 UTC Remote: 2024-08-16 00:34:04.071925107 +0000 UTC m=+252.320651106 (delta=61.465268ms)
	I0816 00:34:04.178537   79191 fix.go:200] guest clock delta is within tolerance: 61.465268ms
	I0816 00:34:04.178541   79191 start.go:83] releasing machines lock for "old-k8s-version-098619", held for 19.923923778s
	I0816 00:34:04.178567   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.178875   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:04.181999   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182458   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.182490   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183192   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183357   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183412   79191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:04.183461   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.183553   79191 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:04.183575   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.186192   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186418   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186507   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186531   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186679   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.186811   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186836   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186850   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187016   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187032   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.187211   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187215   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.187364   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187488   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.283880   79191 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:04.289798   79191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:04.436822   79191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:04.443547   79191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:04.443631   79191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:04.464783   79191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:04.464807   79191 start.go:495] detecting cgroup driver to use...
	I0816 00:34:04.464873   79191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:04.481504   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:04.501871   79191 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:04.501942   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:04.521898   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:04.538186   79191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:04.704361   79191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:04.881682   79191 docker.go:233] disabling docker service ...
	I0816 00:34:04.881757   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:04.900264   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:04.916152   79191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:05.048440   79191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:05.166183   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:05.181888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:05.202525   79191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 00:34:05.202592   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.214655   79191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:05.214712   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.226052   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.236878   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.249217   79191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:05.260362   79191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:05.271039   79191 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:05.271108   79191 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:05.290423   79191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:05.307175   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:05.465815   79191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:05.640787   79191 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:05.640878   79191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:05.646821   79191 start.go:563] Will wait 60s for crictl version
	I0816 00:34:05.646883   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:05.651455   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:05.698946   79191 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:05.699037   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.729185   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.772063   79191 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 00:34:05.773406   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:05.776689   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777177   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:05.777241   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777435   79191 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:05.782377   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:05.797691   79191 kubeadm.go:883] updating cluster {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:05.797872   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:34:05.797953   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:05.861468   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:05.861557   79191 ssh_runner.go:195] Run: which lz4
	I0816 00:34:05.866880   79191 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:34:05.872036   79191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:34:05.872071   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 00:34:04.202120   78489 main.go:141] libmachine: (no-preload-819398) Calling .Start
	I0816 00:34:04.202293   78489 main.go:141] libmachine: (no-preload-819398) Ensuring networks are active...
	I0816 00:34:04.203062   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network default is active
	I0816 00:34:04.203345   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network mk-no-preload-819398 is active
	I0816 00:34:04.205286   78489 main.go:141] libmachine: (no-preload-819398) Getting domain xml...
	I0816 00:34:04.206025   78489 main.go:141] libmachine: (no-preload-819398) Creating domain...
	I0816 00:34:05.553661   78489 main.go:141] libmachine: (no-preload-819398) Waiting to get IP...
	I0816 00:34:05.554629   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.555210   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.555309   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.555211   80407 retry.go:31] will retry after 298.759084ms: waiting for machine to come up
	I0816 00:34:05.856046   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.856571   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.856604   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.856530   80407 retry.go:31] will retry after 293.278331ms: waiting for machine to come up
	I0816 00:34:06.151110   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.151542   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.151571   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.151498   80407 retry.go:31] will retry after 332.472371ms: waiting for machine to come up
	I0816 00:34:06.485927   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.486487   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.486514   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.486459   80407 retry.go:31] will retry after 600.720276ms: waiting for machine to come up
	I0816 00:34:05.926954   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.929140   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:06.972334   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:07.469652   78747 node_ready.go:49] node "default-k8s-diff-port-616827" has status "Ready":"True"
	I0816 00:34:07.469684   78747 node_ready.go:38] duration metric: took 7.004536271s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:07.469700   78747 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:07.476054   78747 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482839   78747 pod_ready.go:93] pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.482861   78747 pod_ready.go:82] duration metric: took 6.779315ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482871   78747 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489325   78747 pod_ready.go:93] pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.489348   78747 pod_ready.go:82] duration metric: took 6.470629ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489357   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495536   78747 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.495555   78747 pod_ready.go:82] duration metric: took 6.192295ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495565   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:09.503258   78747 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.631328   79191 crio.go:462] duration metric: took 1.76448771s to copy over tarball
	I0816 00:34:07.631413   79191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:34:10.662435   79191 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.030990355s)
	I0816 00:34:10.662472   79191 crio.go:469] duration metric: took 3.031115615s to extract the tarball
	I0816 00:34:10.662482   79191 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:34:10.707627   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:10.745704   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:10.745742   79191 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.745838   79191 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.745914   79191 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.745860   79191 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.745943   79191 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.745884   79191 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.746059   79191 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747781   79191 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.747803   79191 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.747808   79191 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.747824   79191 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.747842   79191 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.747883   79191 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.747895   79191 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747948   79191 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.916488   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.923947   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.931668   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.942764   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.948555   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.957593   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.970039   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 00:34:11.012673   79191 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 00:34:11.012707   79191 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.012778   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.026267   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:11.135366   79191 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 00:34:11.135398   79191 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.135451   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.149180   79191 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 00:34:11.149226   79191 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.149271   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183480   79191 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 00:34:11.183526   79191 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.183526   79191 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 00:34:11.183578   79191 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.183584   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183637   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186513   79191 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 00:34:11.186559   79191 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.186622   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186632   79191 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 00:34:11.186658   79191 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 00:34:11.186699   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186722   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.252857   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.252914   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.252935   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.253007   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.253012   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.253083   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.253140   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420527   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.420559   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.420564   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.420638   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420732   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.420791   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.420813   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591141   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.591197   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.591267   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.591337   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.591418   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591453   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.591505   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 00:34:11.721234   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 00:34:11.725967   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 00:34:11.731189   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 00:34:11.731276   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 00:34:11.742195   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 00:34:11.742224   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 00:34:11.742265   79191 cache_images.go:92] duration metric: took 996.507737ms to LoadCachedImages
	W0816 00:34:11.742327   79191 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0816 00:34:11.742342   79191 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0816 00:34:11.742464   79191 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-098619 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:11.742546   79191 ssh_runner.go:195] Run: crio config
	I0816 00:34:07.089462   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.090073   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.090099   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.089985   80407 retry.go:31] will retry after 666.260439ms: waiting for machine to come up
	I0816 00:34:07.757621   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.758156   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.758182   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.758105   80407 retry.go:31] will retry after 782.571604ms: waiting for machine to come up
	I0816 00:34:08.542021   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:08.542426   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:08.542475   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:08.542381   80407 retry.go:31] will retry after 840.347921ms: waiting for machine to come up
	I0816 00:34:09.384399   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:09.384866   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:09.384893   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:09.384824   80407 retry.go:31] will retry after 1.376690861s: waiting for machine to come up
	I0816 00:34:10.763158   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:10.763547   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:10.763573   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:10.763484   80407 retry.go:31] will retry after 1.237664711s: waiting for machine to come up
	I0816 00:34:10.426656   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:12.429312   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.354758   78747 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.354783   78747 pod_ready.go:82] duration metric: took 3.859210458s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.354796   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363323   78747 pod_ready.go:93] pod "kube-proxy-f99ds" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.363347   78747 pod_ready.go:82] duration metric: took 8.543406ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363359   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369799   78747 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.369826   78747 pod_ready.go:82] duration metric: took 6.458192ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369858   78747 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:13.376479   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.791749   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:34:11.791779   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:11.791791   79191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:11.791810   79191 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098619 NodeName:old-k8s-version-098619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 00:34:11.791969   79191 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-098619"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
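As a side note on the kubeadm config printed above (later written to /var/tmp/minikube/kubeadm.yaml.new), the InitConfiguration fragment can be thought of as a template filled in with the node name, IP, and API server port. The Go sketch below renders such a fragment; the template text and field names are illustrative assumptions, not minikube's actual template.

// kubeadmtemplate.go: sketch of rendering an InitConfiguration fragment from
// the profile values seen in this log (old-k8s-version-098619, 192.168.72.137).
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	data := struct {
		NodeName      string
		NodeIP        string
		APIServerPort int
	}{"old-k8s-version-098619", "192.168.72.137", 8443}

	tmpl := template.Must(template.New("init").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}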
	
	I0816 00:34:11.792046   79191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 00:34:11.802572   79191 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:11.802649   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:11.812583   79191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 00:34:11.831551   79191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:11.852476   79191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 00:34:11.875116   79191 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:11.879833   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:11.893308   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:12.038989   79191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:12.061736   79191 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619 for IP: 192.168.72.137
	I0816 00:34:12.061761   79191 certs.go:194] generating shared ca certs ...
	I0816 00:34:12.061780   79191 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.061992   79191 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:12.062046   79191 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:12.062059   79191 certs.go:256] generating profile certs ...
	I0816 00:34:12.062193   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key
	I0816 00:34:12.062283   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4
	I0816 00:34:12.062343   79191 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key
	I0816 00:34:12.062485   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:12.062523   79191 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:12.062536   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:12.062579   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:12.062614   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:12.062658   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:12.062721   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:12.063630   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:12.106539   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:12.139393   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:12.171548   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:12.213113   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 00:34:12.244334   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:34:12.287340   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:12.331047   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:34:12.369666   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:12.397260   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:12.424009   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:12.450212   79191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:12.471550   79191 ssh_runner.go:195] Run: openssl version
	I0816 00:34:12.479821   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:12.494855   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500546   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500620   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.508817   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:12.521689   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:12.533904   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538789   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538946   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.546762   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:12.561940   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:12.575852   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582377   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582457   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.590772   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:12.604976   79191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:12.610332   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:12.617070   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:12.625769   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:12.634342   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:12.641486   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:12.650090   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
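The "openssl x509 -noout -in ... -checkend 86400" runs above verify that each reused certificate is still valid for at least 24 hours before the cluster restart proceeds. The Go sketch below performs the equivalent check with crypto/x509; the file path is just one of the certs listed above, used as an example.

// certcheck.go: sketch of the same check "-checkend 86400" performs.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24 hours")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24 hours")
}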
	I0816 00:34:12.658206   79191 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:12.658306   79191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:12.658392   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.703323   79191 cri.go:89] found id: ""
	I0816 00:34:12.703399   79191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:12.714950   79191 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:12.714970   79191 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:12.715047   79191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:12.727051   79191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:12.728059   79191 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:12.728655   79191 kubeconfig.go:62] /home/jenkins/minikube-integration/19452-12919/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-098619" cluster setting kubeconfig missing "old-k8s-version-098619" context setting]
	I0816 00:34:12.729552   79191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.731269   79191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:12.744732   79191 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0816 00:34:12.744766   79191 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:12.744777   79191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:12.744833   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.783356   79191 cri.go:89] found id: ""
	I0816 00:34:12.783432   79191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:12.801942   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:12.816412   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:12.816433   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:12.816480   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:12.827686   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:12.827757   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:12.838063   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:12.847714   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:12.847808   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:12.858274   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.869328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:12.869389   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.881457   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:12.892256   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:12.892325   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:12.902115   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:12.912484   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.040145   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.851639   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.085396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.208430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.321003   79191 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:14.321084   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:14.822130   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.321780   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.822121   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:16.322077   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:12.002977   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:12.003441   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:12.003470   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:12.003401   80407 retry.go:31] will retry after 1.413320186s: waiting for machine to come up
	I0816 00:34:13.418972   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:13.419346   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:13.419374   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:13.419284   80407 retry.go:31] will retry after 2.055525842s: waiting for machine to come up
	I0816 00:34:15.476550   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:15.477044   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:15.477072   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:15.477021   80407 retry.go:31] will retry after 2.728500649s: waiting for machine to come up
	I0816 00:34:14.926133   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.930322   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:15.377291   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:17.877627   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.821714   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.321166   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.821648   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.321711   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.821520   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.321732   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.821325   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.321783   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.821958   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:21.321139   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.208958   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:18.209350   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:18.209379   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:18.209302   80407 retry.go:31] will retry after 3.922749943s: waiting for machine to come up
	I0816 00:34:19.426265   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.926480   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.134804   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135230   78489 main.go:141] libmachine: (no-preload-819398) Found IP for machine: 192.168.61.15
	I0816 00:34:22.135266   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has current primary IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135292   78489 main.go:141] libmachine: (no-preload-819398) Reserving static IP address...
	I0816 00:34:22.135596   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.135629   78489 main.go:141] libmachine: (no-preload-819398) DBG | skip adding static IP to network mk-no-preload-819398 - found existing host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"}
	I0816 00:34:22.135644   78489 main.go:141] libmachine: (no-preload-819398) Reserved static IP address: 192.168.61.15
	I0816 00:34:22.135661   78489 main.go:141] libmachine: (no-preload-819398) Waiting for SSH to be available...
	I0816 00:34:22.135675   78489 main.go:141] libmachine: (no-preload-819398) DBG | Getting to WaitForSSH function...
	I0816 00:34:22.137639   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.137925   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.137956   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.138099   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH client type: external
	I0816 00:34:22.138141   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa (-rw-------)
	I0816 00:34:22.138198   78489 main.go:141] libmachine: (no-preload-819398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:22.138233   78489 main.go:141] libmachine: (no-preload-819398) DBG | About to run SSH command:
	I0816 00:34:22.138248   78489 main.go:141] libmachine: (no-preload-819398) DBG | exit 0
	I0816 00:34:22.262094   78489 main.go:141] libmachine: (no-preload-819398) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:22.262496   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetConfigRaw
	I0816 00:34:22.263081   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.265419   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.265746   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.265782   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.266097   78489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/config.json ...
	I0816 00:34:22.266283   78489 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:22.266301   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:22.266501   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.268848   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269269   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.269308   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269356   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.269537   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269684   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269803   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.269971   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.270185   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.270197   78489 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:22.374848   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:22.374880   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375169   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:34:22.375195   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375407   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.378309   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378649   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.378678   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378853   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.379060   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379203   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379362   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.379568   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.379735   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.379749   78489 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-819398 && echo "no-preload-819398" | sudo tee /etc/hostname
	I0816 00:34:22.496438   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-819398
	
	I0816 00:34:22.496467   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.499101   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499411   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.499443   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499703   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.499912   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500116   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500247   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.500419   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.500624   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.500650   78489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-819398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-819398/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-819398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:22.619769   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:22.619802   78489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:22.619826   78489 buildroot.go:174] setting up certificates
	I0816 00:34:22.619837   78489 provision.go:84] configureAuth start
	I0816 00:34:22.619847   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.620106   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.623130   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623485   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.623510   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623629   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.625964   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626308   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.626335   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626475   78489 provision.go:143] copyHostCerts
	I0816 00:34:22.626536   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:22.626557   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:22.626629   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:22.626756   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:22.626768   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:22.626798   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:22.626889   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:22.626899   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:22.626925   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:22.627008   78489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.no-preload-819398 san=[127.0.0.1 192.168.61.15 localhost minikube no-preload-819398]
	I0816 00:34:22.710036   78489 provision.go:177] copyRemoteCerts
	I0816 00:34:22.710093   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:22.710120   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.712944   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713380   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.713409   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713612   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.713780   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.713926   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.714082   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:22.800996   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:22.828264   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:34:22.855258   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:22.880981   78489 provision.go:87] duration metric: took 261.134406ms to configureAuth
	I0816 00:34:22.881013   78489 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:22.881176   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:22.881240   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.883962   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884348   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.884368   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884611   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.884828   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885052   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885248   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.885448   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.885639   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.885661   78489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:23.154764   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:23.154802   78489 machine.go:96] duration metric: took 888.504728ms to provisionDockerMachine
	I0816 00:34:23.154821   78489 start.go:293] postStartSetup for "no-preload-819398" (driver="kvm2")
	I0816 00:34:23.154837   78489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:23.154860   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.155176   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:23.155205   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.158105   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158482   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.158517   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158674   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.158864   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.159039   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.159198   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.241041   78489 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:23.245237   78489 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:23.245260   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:23.245324   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:23.245398   78489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:23.245480   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:23.254735   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:23.279620   78489 start.go:296] duration metric: took 124.783636ms for postStartSetup
	I0816 00:34:23.279668   78489 fix.go:56] duration metric: took 19.100951861s for fixHost
	I0816 00:34:23.279693   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.282497   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.282959   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.282981   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.283184   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.283376   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283514   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283687   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.283870   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:23.284027   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:23.284037   78489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:23.390632   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768463.360038650
	
	I0816 00:34:23.390658   78489 fix.go:216] guest clock: 1723768463.360038650
	I0816 00:34:23.390668   78489 fix.go:229] Guest: 2024-08-16 00:34:23.36003865 +0000 UTC Remote: 2024-08-16 00:34:23.27967333 +0000 UTC m=+356.445975156 (delta=80.36532ms)
	I0816 00:34:23.390697   78489 fix.go:200] guest clock delta is within tolerance: 80.36532ms
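The fix.go lines above compare the VM's clock (read via `date +%s.%N`) against the host-side timestamp and accept the drift when it is within tolerance; here the delta is 80.36532ms. The Go sketch below reproduces that delta from the two timestamps in this log; the 2s tolerance is an assumed value for illustration, not necessarily minikube's constant.

// clockdelta.go: sketch of the guest-vs-host clock comparison.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseClock turns a "seconds.fraction" string (as printed by `date +%s.%N`)
// into a time.Time, padding the fraction out to nanoseconds.
func parseClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		for len(frac) < 9 {
			frac += "0"
		}
		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseClock("1723768463.360038650")  // VM clock from the log
	remote, _ := parseClock("1723768463.279673330") // host-side timestamp
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumption for the sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}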
	I0816 00:34:23.390710   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 19.212026147s
	I0816 00:34:23.390729   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.390977   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:23.393728   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394050   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.394071   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394255   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394722   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394895   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394977   78489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:23.395028   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.395135   78489 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:23.395151   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.397773   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.397939   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398196   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398237   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398354   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398480   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398507   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398515   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398717   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.398722   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398887   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398884   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.399029   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.399164   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.497983   78489 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:23.503896   78489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:23.660357   78489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:23.666714   78489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:23.666775   78489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:23.684565   78489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:23.684586   78489 start.go:495] detecting cgroup driver to use...
	I0816 00:34:23.684655   78489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:23.701981   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:23.715786   78489 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:23.715852   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:23.733513   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:23.748705   78489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:23.866341   78489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:24.016845   78489 docker.go:233] disabling docker service ...
	I0816 00:34:24.016918   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:24.032673   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:24.046465   78489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:24.184862   78489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:24.309066   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:24.323818   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:24.344352   78489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:34:24.344422   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.355015   78489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:24.355093   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.365665   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.377238   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.388619   78489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:24.399306   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.410087   78489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.428465   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.439026   78489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:24.448856   78489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:24.448943   78489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:24.463002   78489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:24.473030   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:24.587542   78489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:24.719072   78489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:24.719159   78489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:24.723789   78489 start.go:563] Will wait 60s for crictl version
	I0816 00:34:24.723842   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:24.727616   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:24.766517   78489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:24.766600   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.795204   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.824529   78489 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:34:20.376278   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.376510   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:24.876314   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.822114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.321350   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.821541   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.322014   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.821938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.321883   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.821178   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.321881   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.821199   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:26.321573   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.825725   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:24.828458   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829018   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:24.829045   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829336   78489 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:24.833711   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:24.847017   78489 kubeadm.go:883] updating cluster {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:24.847136   78489 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:34:24.847171   78489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:24.883489   78489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:34:24.883515   78489 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:24.883592   78489 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.883612   78489 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.883664   78489 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:24.883690   78489 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.883719   78489 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.883595   78489 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.883927   78489 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.884016   78489 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885061   78489 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.885185   78489 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885207   78489 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.885204   78489 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.885225   78489 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.042311   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.042317   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.048181   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.050502   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.059137   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.091688   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 00:34:25.096653   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.126261   78489 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 00:34:25.126311   78489 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.126368   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.164673   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.189972   78489 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 00:34:25.190014   78489 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.190051   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249632   78489 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 00:34:25.249674   78489 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.249717   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249780   78489 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 00:34:25.249824   78489 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.249884   78489 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 00:34:25.249910   78489 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.249887   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249942   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360038   78489 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 00:34:25.360082   78489 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.360121   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360133   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.360191   78489 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 00:34:25.360208   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.360221   78489 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.360256   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360283   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.360326   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.360337   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.462610   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.462691   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.480037   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.480114   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.480176   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.480211   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.489343   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.642853   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.642913   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.642963   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.645719   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.645749   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.645833   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.645899   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.802574   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 00:34:25.802645   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.802687   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.802728   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.808235   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 00:34:25.808330   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 00:34:25.808387   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 00:34:25.808401   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 00:34:25.808432   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:25.808334   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:25.808471   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:25.808480   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:25.816510   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 00:34:25.816527   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.816560   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.885445   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 00:34:25.885532   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 00:34:25.885549   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:25.885588   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 00:34:25.885600   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:25.885674   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 00:34:25.885690   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 00:34:25.885711   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 00:34:24.426102   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.927534   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.877013   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:29.378108   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.821489   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.322094   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.321201   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.821854   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.321188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.821729   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.321316   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.821998   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:31.322184   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.938767   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.122182459s)
	I0816 00:34:27.938804   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 00:34:27.938801   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.05323098s)
	I0816 00:34:27.938826   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.05321158s)
	I0816 00:34:27.938831   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 00:34:27.938833   78489 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:27.938843   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 00:34:27.938906   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:31.645449   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.706515577s)
	I0816 00:34:31.645486   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 00:34:31.645514   78489 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:31.645563   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:29.427463   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.927253   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.875608   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:33.876822   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.821361   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.321205   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.822088   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.322126   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.821956   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.321921   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.821245   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.822034   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:36.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.625714   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.980118908s)
	I0816 00:34:33.625749   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 00:34:33.625773   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:33.625824   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:35.680134   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054281396s)
	I0816 00:34:35.680167   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 00:34:35.680209   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:35.680276   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:34.426416   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.427589   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:38.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:35.877327   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:37.877385   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.821567   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.321329   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.822169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.321832   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.821404   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.321406   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.821914   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.322169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.821149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:41.322125   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.430152   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.749849436s)
	I0816 00:34:37.430180   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 00:34:37.430208   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:37.430254   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:39.684335   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.254047221s)
	I0816 00:34:39.684365   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 00:34:39.684391   78489 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:39.684445   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:40.328672   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 00:34:40.328722   78489 cache_images.go:123] Successfully loaded all cached images
	I0816 00:34:40.328729   78489 cache_images.go:92] duration metric: took 15.445200533s to LoadCachedImages
	I0816 00:34:40.328743   78489 kubeadm.go:934] updating node { 192.168.61.15 8443 v1.31.0 crio true true} ...
	I0816 00:34:40.328897   78489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-819398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:40.328994   78489 ssh_runner.go:195] Run: crio config
	I0816 00:34:40.383655   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:40.383675   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:40.383685   78489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:40.383712   78489 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.15 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-819398 NodeName:no-preload-819398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:34:40.383855   78489 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-819398"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:40.383930   78489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:34:40.395384   78489 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:40.395457   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:40.405037   78489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 00:34:40.423278   78489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:40.440963   78489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 00:34:40.458845   78489 ssh_runner.go:195] Run: grep 192.168.61.15	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:40.462574   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:40.475524   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:40.614624   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:40.632229   78489 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398 for IP: 192.168.61.15
	I0816 00:34:40.632252   78489 certs.go:194] generating shared ca certs ...
	I0816 00:34:40.632267   78489 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:40.632430   78489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:40.632483   78489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:40.632497   78489 certs.go:256] generating profile certs ...
	I0816 00:34:40.632598   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/client.key
	I0816 00:34:40.632679   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key.a9de72ef
	I0816 00:34:40.632759   78489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key
	I0816 00:34:40.632919   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:40.632962   78489 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:40.632978   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:40.633011   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:40.633042   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:40.633068   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:40.633124   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:40.633963   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:40.676094   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:40.707032   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:40.740455   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:40.778080   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 00:34:40.809950   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:34:40.841459   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:40.866708   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:34:40.893568   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:40.917144   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:40.942349   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:40.966731   78489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:40.984268   78489 ssh_runner.go:195] Run: openssl version
	I0816 00:34:40.990614   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:41.002909   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007595   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007645   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.013618   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:41.024886   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:41.036350   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040801   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040845   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.046554   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:41.057707   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:41.069566   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074107   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074159   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.080113   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:41.091854   78489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:41.096543   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:41.102883   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:41.109228   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:41.115622   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:41.121895   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:41.128016   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 00:34:41.134126   78489 kubeadm.go:392] StartCluster: {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:41.134230   78489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:41.134310   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.178898   78489 cri.go:89] found id: ""
	I0816 00:34:41.178972   78489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:41.190167   78489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:41.190184   78489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:41.190223   78489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:41.200385   78489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:41.201824   78489 kubeconfig.go:125] found "no-preload-819398" server: "https://192.168.61.15:8443"
	I0816 00:34:41.204812   78489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:41.225215   78489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.15
	I0816 00:34:41.225252   78489 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:41.225265   78489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:41.225323   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.269288   78489 cri.go:89] found id: ""
	I0816 00:34:41.269377   78489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:41.286238   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:41.297713   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:41.297732   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:41.297782   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:41.308635   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:41.308695   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:41.320045   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:41.329866   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:41.329952   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:41.341488   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.351018   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:41.351083   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.360845   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:41.370730   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:41.370808   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:41.382572   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:41.392544   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.515558   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.425671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:43.426507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:40.377638   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:42.877395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:41.821459   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.321938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.822038   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.321447   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.821571   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.321428   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.821496   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:46.322149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.610068   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.094473643s)
	I0816 00:34:42.610106   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.850562   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.916519   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:43.042025   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:43.042117   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.543065   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.043098   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.061154   78489 api_server.go:72] duration metric: took 1.019134992s to wait for apiserver process to appear ...
	I0816 00:34:44.061180   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:34:44.061199   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.718683   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.718717   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:46.718730   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.785528   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.785559   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:47.061692   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.066556   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.066590   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:47.562057   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.569664   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.569699   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:48.061258   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:48.065926   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:34:48.073136   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:34:48.073165   78489 api_server.go:131] duration metric: took 4.011977616s to wait for apiserver health ...
	I0816 00:34:48.073179   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:48.073189   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:48.075105   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:34:45.925817   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.925984   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:45.376424   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.377794   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.876764   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:46.822140   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.321575   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.321365   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.822009   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.321536   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.821189   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.321387   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.821982   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:51.322075   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.076340   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:34:48.113148   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:34:48.152316   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:34:48.166108   78489 system_pods.go:59] 8 kube-system pods found
	I0816 00:34:48.166142   78489 system_pods.go:61] "coredns-6f6b679f8f-sv454" [5ba1d55f-4455-4ad1-b3c8-7671ce481dd2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:34:48.166154   78489 system_pods.go:61] "etcd-no-preload-819398" [b5e55df3-fb20-4980-928f-31217bf25351] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:34:48.166164   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [7670f41c-8439-4782-a3c8-077a144d2998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:34:48.166175   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [61a6080a-5e65-4400-b230-0703f347fc17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:34:48.166182   78489 system_pods.go:61] "kube-proxy-xdm7w" [9d0517c5-8cf7-47a0-86d0-c674677e9f46] Running
	I0816 00:34:48.166191   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [af346e37-312a-4225-b3bf-0ddda71022dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:34:48.166204   78489 system_pods.go:61] "metrics-server-6867b74b74-mm5l7" [2ebc3f9f-e1a7-47b6-849e-6a4995d13206] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:34:48.166214   78489 system_pods.go:61] "storage-provisioner" [745bbfbd-aedb-4e68-946e-5a7ead1d5b48] Running
	I0816 00:34:48.166223   78489 system_pods.go:74] duration metric: took 13.883212ms to wait for pod list to return data ...
	I0816 00:34:48.166235   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:34:48.170444   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:34:48.170478   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:34:48.170492   78489 node_conditions.go:105] duration metric: took 4.251703ms to run NodePressure ...
	I0816 00:34:48.170520   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:48.437519   78489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:34:48.441992   78489 kubeadm.go:739] kubelet initialised
	I0816 00:34:48.442015   78489 kubeadm.go:740] duration metric: took 4.465986ms waiting for restarted kubelet to initialise ...
	I0816 00:34:48.442025   78489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:48.447127   78489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:50.453956   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.926184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.926515   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.876909   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.376236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.822066   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.321534   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.821154   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.321256   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.821510   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.321984   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.821175   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.321601   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:56.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.454122   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.954716   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.426224   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.926472   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.376394   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:58.876502   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.821891   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.321266   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.821346   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.321718   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.821304   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.821302   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.821563   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:01.321323   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.453951   78489 pod_ready.go:93] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:57.453974   78489 pod_ready.go:82] duration metric: took 9.00682228s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:57.453983   78489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.460582   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.961243   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:00.961269   78489 pod_ready.go:82] duration metric: took 3.507278873s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:00.961279   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468020   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:01.468047   78489 pod_ready.go:82] duration metric: took 506.758881ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468060   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.425956   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.925967   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.876678   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:03.376662   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.821317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.321560   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.821707   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.322110   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.821327   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.321430   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.821935   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.321559   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.821373   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.975498   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.975522   78489 pod_ready.go:82] duration metric: took 1.50745395s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.975531   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980290   78489 pod_ready.go:93] pod "kube-proxy-xdm7w" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.980316   78489 pod_ready.go:82] duration metric: took 4.778704ms for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980328   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988237   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.988260   78489 pod_ready.go:82] duration metric: took 7.924207ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988268   78489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:04.993992   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:04.426419   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.426648   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.927578   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:05.877102   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:07.877187   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.821405   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.321781   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.821420   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.321483   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.821347   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.321167   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.821188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.821179   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:11.322114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.994539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.995530   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.494248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.425605   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:13.426338   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:10.378729   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:12.875673   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:14.876717   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.822105   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.321963   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.822172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.321805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.821971   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:14.321784   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:14.321882   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:14.360939   79191 cri.go:89] found id: ""
	I0816 00:35:14.360962   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.360971   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:14.360976   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:14.361028   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:14.397796   79191 cri.go:89] found id: ""
	I0816 00:35:14.397824   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.397836   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:14.397858   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:14.397922   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:14.433924   79191 cri.go:89] found id: ""
	I0816 00:35:14.433950   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.433960   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:14.433968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:14.434024   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:14.468657   79191 cri.go:89] found id: ""
	I0816 00:35:14.468685   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.468696   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:14.468704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:14.468770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:14.505221   79191 cri.go:89] found id: ""
	I0816 00:35:14.505247   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.505256   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:14.505264   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:14.505323   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:14.546032   79191 cri.go:89] found id: ""
	I0816 00:35:14.546062   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.546072   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:14.546079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:14.546147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:14.581260   79191 cri.go:89] found id: ""
	I0816 00:35:14.581284   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.581292   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:14.581298   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:14.581352   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:14.616103   79191 cri.go:89] found id: ""
	I0816 00:35:14.616127   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.616134   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:14.616142   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:14.616153   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:14.690062   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:14.690106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:14.735662   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:14.735699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:14.786049   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:14.786086   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:14.800375   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:14.800405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:14.931822   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:13.494676   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.497759   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.925671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.926279   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.375842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.376005   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.432686   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:17.448728   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:17.448806   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:17.496384   79191 cri.go:89] found id: ""
	I0816 00:35:17.496523   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.496568   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:17.496581   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:17.496646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:17.560779   79191 cri.go:89] found id: ""
	I0816 00:35:17.560810   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.560820   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:17.560829   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:17.560891   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:17.606007   79191 cri.go:89] found id: ""
	I0816 00:35:17.606036   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.606047   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:17.606054   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:17.606123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:17.639910   79191 cri.go:89] found id: ""
	I0816 00:35:17.639937   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.639945   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:17.639951   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:17.640030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:17.676534   79191 cri.go:89] found id: ""
	I0816 00:35:17.676563   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.676573   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:17.676581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:17.676645   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:17.716233   79191 cri.go:89] found id: ""
	I0816 00:35:17.716255   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.716262   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:17.716268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:17.716334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:17.753648   79191 cri.go:89] found id: ""
	I0816 00:35:17.753686   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.753696   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:17.753704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:17.753763   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:17.791670   79191 cri.go:89] found id: ""
	I0816 00:35:17.791694   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.791702   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:17.791711   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:17.791722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:17.840616   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:17.840650   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:17.854949   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:17.854981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:17.933699   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:17.933724   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:17.933750   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:18.010177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:18.010211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:20.551384   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:20.564463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:20.564540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:20.604361   79191 cri.go:89] found id: ""
	I0816 00:35:20.604389   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.604399   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:20.604405   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:20.604453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:20.639502   79191 cri.go:89] found id: ""
	I0816 00:35:20.639528   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.639535   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:20.639541   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:20.639590   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:20.676430   79191 cri.go:89] found id: ""
	I0816 00:35:20.676476   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.676484   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:20.676496   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:20.676551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:20.711213   79191 cri.go:89] found id: ""
	I0816 00:35:20.711243   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.711253   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:20.711261   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:20.711320   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:20.745533   79191 cri.go:89] found id: ""
	I0816 00:35:20.745563   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.745574   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:20.745581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:20.745644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:20.781031   79191 cri.go:89] found id: ""
	I0816 00:35:20.781056   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.781064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:20.781071   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:20.781119   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:20.819966   79191 cri.go:89] found id: ""
	I0816 00:35:20.819994   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.820005   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:20.820012   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:20.820096   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:20.859011   79191 cri.go:89] found id: ""
	I0816 00:35:20.859041   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.859052   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:20.859063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:20.859078   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:20.909479   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:20.909513   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:20.925627   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:20.925653   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:21.001707   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:21.001733   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:21.001747   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:21.085853   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:21.085893   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:17.994492   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:20.496255   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:22.426663   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:21.878587   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.377462   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:23.626499   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:23.640337   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:23.640395   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:23.679422   79191 cri.go:89] found id: ""
	I0816 00:35:23.679449   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.679457   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:23.679463   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:23.679522   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:23.716571   79191 cri.go:89] found id: ""
	I0816 00:35:23.716594   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.716601   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:23.716607   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:23.716660   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:23.752539   79191 cri.go:89] found id: ""
	I0816 00:35:23.752563   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.752573   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:23.752581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:23.752640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:23.790665   79191 cri.go:89] found id: ""
	I0816 00:35:23.790693   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.790700   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:23.790707   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:23.790757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:23.827695   79191 cri.go:89] found id: ""
	I0816 00:35:23.827719   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.827727   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:23.827733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:23.827792   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:23.867664   79191 cri.go:89] found id: ""
	I0816 00:35:23.867687   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.867695   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:23.867701   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:23.867776   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:23.907844   79191 cri.go:89] found id: ""
	I0816 00:35:23.907871   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.907882   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:23.907890   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:23.907951   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:23.945372   79191 cri.go:89] found id: ""
	I0816 00:35:23.945403   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.945414   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:23.945424   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:23.945438   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:23.998270   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:23.998302   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:24.012794   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:24.012824   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:24.087285   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:24.087308   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:24.087340   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:24.167151   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:24.167184   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:26.710285   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:26.724394   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:26.724453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:26.764667   79191 cri.go:89] found id: ""
	I0816 00:35:26.764690   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.764698   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:26.764704   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:26.764756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:22.994036   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.995035   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.927042   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:27.426054   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.877007   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.376563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.806631   79191 cri.go:89] found id: ""
	I0816 00:35:26.806660   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.806670   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:26.806677   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:26.806741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:26.843434   79191 cri.go:89] found id: ""
	I0816 00:35:26.843473   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.843485   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:26.843493   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:26.843576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:26.882521   79191 cri.go:89] found id: ""
	I0816 00:35:26.882556   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.882566   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:26.882574   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:26.882635   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:26.917956   79191 cri.go:89] found id: ""
	I0816 00:35:26.917985   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.917995   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:26.918004   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:26.918056   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:26.953168   79191 cri.go:89] found id: ""
	I0816 00:35:26.953191   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.953199   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:26.953205   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:26.953251   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:26.991366   79191 cri.go:89] found id: ""
	I0816 00:35:26.991397   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.991408   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:26.991416   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:26.991479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:27.028591   79191 cri.go:89] found id: ""
	I0816 00:35:27.028619   79191 logs.go:276] 0 containers: []
	W0816 00:35:27.028626   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:27.028635   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:27.028647   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:27.111613   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:27.111645   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:27.153539   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:27.153575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:27.209377   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:27.209420   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:27.223316   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:27.223343   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:27.301411   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:29.801803   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:29.815545   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:29.815626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:29.853638   79191 cri.go:89] found id: ""
	I0816 00:35:29.853668   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.853678   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:29.853687   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:29.853756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:29.892532   79191 cri.go:89] found id: ""
	I0816 00:35:29.892554   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.892561   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:29.892567   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:29.892622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:29.932486   79191 cri.go:89] found id: ""
	I0816 00:35:29.932511   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.932519   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:29.932524   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:29.932580   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:29.973161   79191 cri.go:89] found id: ""
	I0816 00:35:29.973194   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.973205   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:29.973213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:29.973275   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:30.009606   79191 cri.go:89] found id: ""
	I0816 00:35:30.009629   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.009637   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:30.009643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:30.009691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:30.045016   79191 cri.go:89] found id: ""
	I0816 00:35:30.045043   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.045050   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:30.045057   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:30.045113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:30.079934   79191 cri.go:89] found id: ""
	I0816 00:35:30.079959   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.079968   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:30.079974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:30.080030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:30.114173   79191 cri.go:89] found id: ""
	I0816 00:35:30.114199   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.114207   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:30.114216   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:30.114227   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:30.154765   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:30.154791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:30.204410   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:30.204442   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:30.218909   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:30.218934   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:30.294141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:30.294161   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:30.294193   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:26.995394   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.494569   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.426234   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.926349   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.926433   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.376976   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.377869   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:32.872216   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:32.886211   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:32.886289   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:32.929416   79191 cri.go:89] found id: ""
	I0816 00:35:32.929440   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.929449   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:32.929456   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:32.929520   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:32.977862   79191 cri.go:89] found id: ""
	I0816 00:35:32.977887   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.977896   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:32.977920   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:32.977978   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:33.015569   79191 cri.go:89] found id: ""
	I0816 00:35:33.015593   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.015603   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:33.015622   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:33.015681   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:33.050900   79191 cri.go:89] found id: ""
	I0816 00:35:33.050934   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.050943   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:33.050959   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:33.051033   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:33.084529   79191 cri.go:89] found id: ""
	I0816 00:35:33.084556   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.084564   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:33.084569   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:33.084619   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:33.119819   79191 cri.go:89] found id: ""
	I0816 00:35:33.119845   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.119855   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:33.119863   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:33.119928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:33.159922   79191 cri.go:89] found id: ""
	I0816 00:35:33.159952   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.159959   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:33.159965   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:33.160023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:33.194977   79191 cri.go:89] found id: ""
	I0816 00:35:33.195006   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.195018   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:33.195030   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:33.195044   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:33.208578   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:33.208623   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:33.282177   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:33.282198   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:33.282211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:33.365514   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:33.365552   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:33.405190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:33.405226   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:35.959033   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:35.971866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:35.971934   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:36.008442   79191 cri.go:89] found id: ""
	I0816 00:35:36.008473   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.008483   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:36.008489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:36.008547   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:36.044346   79191 cri.go:89] found id: ""
	I0816 00:35:36.044374   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.044386   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:36.044393   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:36.044444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:36.083078   79191 cri.go:89] found id: ""
	I0816 00:35:36.083104   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.083112   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:36.083118   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:36.083166   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:36.120195   79191 cri.go:89] found id: ""
	I0816 00:35:36.120218   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.120226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:36.120232   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:36.120288   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:36.156186   79191 cri.go:89] found id: ""
	I0816 00:35:36.156215   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.156225   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:36.156233   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:36.156295   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:36.195585   79191 cri.go:89] found id: ""
	I0816 00:35:36.195613   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.195623   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:36.195631   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:36.195699   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:36.231110   79191 cri.go:89] found id: ""
	I0816 00:35:36.231133   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.231141   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:36.231147   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:36.231210   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:36.268745   79191 cri.go:89] found id: ""
	I0816 00:35:36.268770   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.268778   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:36.268786   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:36.268800   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:36.282225   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:36.282251   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:36.351401   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:36.351431   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:36.351447   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:36.429970   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:36.430003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:36.473745   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:36.473776   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:31.994163   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.994256   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.995188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:36.427247   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.926123   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.877303   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:39.027444   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:39.041107   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:39.041170   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:39.079807   79191 cri.go:89] found id: ""
	I0816 00:35:39.079830   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.079837   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:39.079843   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:39.079890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:39.115532   79191 cri.go:89] found id: ""
	I0816 00:35:39.115559   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.115569   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:39.115576   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:39.115623   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:39.150197   79191 cri.go:89] found id: ""
	I0816 00:35:39.150222   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.150233   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:39.150241   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:39.150300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:39.186480   79191 cri.go:89] found id: ""
	I0816 00:35:39.186507   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.186515   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:39.186521   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:39.186572   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:39.221576   79191 cri.go:89] found id: ""
	I0816 00:35:39.221605   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.221615   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:39.221620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:39.221669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:39.259846   79191 cri.go:89] found id: ""
	I0816 00:35:39.259877   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.259888   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:39.259896   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:39.259950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:39.294866   79191 cri.go:89] found id: ""
	I0816 00:35:39.294891   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.294898   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:39.294903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:39.294952   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:39.329546   79191 cri.go:89] found id: ""
	I0816 00:35:39.329576   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.329584   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:39.329593   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:39.329604   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:39.371579   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:39.371609   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:39.422903   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:39.422935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:39.437673   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:39.437699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:39.515146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:39.515171   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:39.515185   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:38.495377   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.495856   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.926444   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:43.426438   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.376648   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.877521   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.101733   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:42.115563   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:42.115640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:42.155187   79191 cri.go:89] found id: ""
	I0816 00:35:42.155216   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.155224   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:42.155230   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:42.155282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:42.194414   79191 cri.go:89] found id: ""
	I0816 00:35:42.194444   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.194456   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:42.194464   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:42.194523   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:42.234219   79191 cri.go:89] found id: ""
	I0816 00:35:42.234245   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.234253   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:42.234259   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:42.234314   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:42.272278   79191 cri.go:89] found id: ""
	I0816 00:35:42.272304   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.272314   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:42.272322   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:42.272381   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:42.309973   79191 cri.go:89] found id: ""
	I0816 00:35:42.309999   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.310007   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:42.310013   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:42.310066   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:42.350745   79191 cri.go:89] found id: ""
	I0816 00:35:42.350773   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.350782   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:42.350790   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:42.350853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:42.387775   79191 cri.go:89] found id: ""
	I0816 00:35:42.387803   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.387813   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:42.387832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:42.387902   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:42.425086   79191 cri.go:89] found id: ""
	I0816 00:35:42.425110   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.425118   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:42.425125   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:42.425138   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:42.515543   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:42.515575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:42.558348   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:42.558372   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:42.613026   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:42.613059   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.628907   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:42.628932   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:42.710265   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.211083   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:45.225001   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:45.225083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:45.258193   79191 cri.go:89] found id: ""
	I0816 00:35:45.258223   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.258232   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:45.258240   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:45.258297   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:45.294255   79191 cri.go:89] found id: ""
	I0816 00:35:45.294278   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.294286   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:45.294291   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:45.294335   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:45.329827   79191 cri.go:89] found id: ""
	I0816 00:35:45.329875   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.329886   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:45.329894   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:45.329944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:45.366095   79191 cri.go:89] found id: ""
	I0816 00:35:45.366124   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.366134   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:45.366141   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:45.366202   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:45.402367   79191 cri.go:89] found id: ""
	I0816 00:35:45.402390   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.402398   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:45.402403   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:45.402449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:45.439272   79191 cri.go:89] found id: ""
	I0816 00:35:45.439293   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.439300   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:45.439310   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:45.439358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:45.474351   79191 cri.go:89] found id: ""
	I0816 00:35:45.474380   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.474388   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:45.474393   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:45.474445   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:45.519636   79191 cri.go:89] found id: ""
	I0816 00:35:45.519661   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.519671   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:45.519680   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:45.519695   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:45.593425   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.593446   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:45.593458   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:45.668058   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:45.668095   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:45.716090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:45.716125   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:45.774177   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:45.774207   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.495914   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:44.996641   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.426740   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.925719   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.376025   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.376628   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.876035   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:48.288893   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:48.302256   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:48.302321   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:48.337001   79191 cri.go:89] found id: ""
	I0816 00:35:48.337030   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.337041   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:48.337048   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:48.337110   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:48.378341   79191 cri.go:89] found id: ""
	I0816 00:35:48.378367   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.378375   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:48.378384   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:48.378447   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:48.414304   79191 cri.go:89] found id: ""
	I0816 00:35:48.414383   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.414402   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:48.414410   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:48.414473   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:48.453946   79191 cri.go:89] found id: ""
	I0816 00:35:48.453969   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.453976   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:48.453982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:48.454036   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:48.489597   79191 cri.go:89] found id: ""
	I0816 00:35:48.489617   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.489623   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:48.489629   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:48.489672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:48.524195   79191 cri.go:89] found id: ""
	I0816 00:35:48.524222   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.524232   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:48.524239   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:48.524293   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:48.567854   79191 cri.go:89] found id: ""
	I0816 00:35:48.567880   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.567890   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:48.567897   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:48.567956   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:48.603494   79191 cri.go:89] found id: ""
	I0816 00:35:48.603520   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.603530   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:48.603540   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:48.603556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:48.642927   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:48.642960   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:48.693761   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:48.693791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:48.708790   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:48.708818   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:48.780072   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:48.780092   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:48.780106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.362108   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:51.376113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:51.376185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:51.413988   79191 cri.go:89] found id: ""
	I0816 00:35:51.414022   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.414033   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:51.414041   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:51.414101   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:51.460901   79191 cri.go:89] found id: ""
	I0816 00:35:51.460937   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.460948   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:51.460956   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:51.461019   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:51.497178   79191 cri.go:89] found id: ""
	I0816 00:35:51.497205   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.497215   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:51.497223   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:51.497365   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:51.534559   79191 cri.go:89] found id: ""
	I0816 00:35:51.534589   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.534600   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:51.534607   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:51.534668   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:51.570258   79191 cri.go:89] found id: ""
	I0816 00:35:51.570280   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.570287   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:51.570293   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:51.570356   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:51.609639   79191 cri.go:89] found id: ""
	I0816 00:35:51.609665   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.609675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:51.609683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:51.609742   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:51.645629   79191 cri.go:89] found id: ""
	I0816 00:35:51.645652   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.645659   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:51.645664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:51.645731   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:51.683325   79191 cri.go:89] found id: ""
	I0816 00:35:51.683344   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.683351   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:51.683358   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:51.683369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:51.739101   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:51.739133   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:51.753436   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:51.753466   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:35:47.494904   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.495416   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.926975   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:51.928318   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:52.376854   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.880623   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:35:51.831242   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:51.831268   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:51.831294   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.926924   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:51.926970   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:54.472667   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:54.486706   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:54.486785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:54.524180   79191 cri.go:89] found id: ""
	I0816 00:35:54.524203   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.524211   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:54.524216   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:54.524273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:54.563758   79191 cri.go:89] found id: ""
	I0816 00:35:54.563781   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.563788   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:54.563795   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:54.563859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:54.599442   79191 cri.go:89] found id: ""
	I0816 00:35:54.599471   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.599481   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:54.599488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:54.599553   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:54.633521   79191 cri.go:89] found id: ""
	I0816 00:35:54.633547   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.633558   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:54.633565   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:54.633628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:54.670036   79191 cri.go:89] found id: ""
	I0816 00:35:54.670064   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.670075   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:54.670083   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:54.670148   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:54.707565   79191 cri.go:89] found id: ""
	I0816 00:35:54.707587   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.707594   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:54.707600   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:54.707659   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:54.744500   79191 cri.go:89] found id: ""
	I0816 00:35:54.744530   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.744541   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:54.744548   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:54.744612   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:54.778964   79191 cri.go:89] found id: ""
	I0816 00:35:54.778988   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.778995   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:54.779007   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:54.779020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:54.831806   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:54.831838   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:54.845954   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:54.845979   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:54.921817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:54.921855   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:54.921871   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:55.006401   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:55.006439   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:51.996591   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.495673   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.427044   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:56.927184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.376333   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.548661   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:57.562489   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:57.562549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:57.597855   79191 cri.go:89] found id: ""
	I0816 00:35:57.597881   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.597891   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:57.597899   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:57.597961   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:57.634085   79191 cri.go:89] found id: ""
	I0816 00:35:57.634114   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.634126   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:57.634133   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:57.634193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:57.671748   79191 cri.go:89] found id: ""
	I0816 00:35:57.671779   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.671788   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:57.671795   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:57.671859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:57.708836   79191 cri.go:89] found id: ""
	I0816 00:35:57.708862   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.708870   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:57.708877   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:57.708940   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:57.744601   79191 cri.go:89] found id: ""
	I0816 00:35:57.744630   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.744639   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:57.744645   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:57.744706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:57.781888   79191 cri.go:89] found id: ""
	I0816 00:35:57.781919   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.781929   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:57.781937   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:57.781997   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:57.822612   79191 cri.go:89] found id: ""
	I0816 00:35:57.822634   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.822641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:57.822647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:57.822706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:57.873968   79191 cri.go:89] found id: ""
	I0816 00:35:57.873998   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.874008   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:57.874019   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:57.874037   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:57.896611   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:57.896643   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:57.995575   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:57.995597   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:57.995612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:58.077196   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:58.077230   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:58.116956   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:58.116985   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:00.664805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:00.678425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:00.678501   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:00.715522   79191 cri.go:89] found id: ""
	I0816 00:36:00.715548   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.715557   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:00.715562   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:00.715608   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:00.749892   79191 cri.go:89] found id: ""
	I0816 00:36:00.749920   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.749931   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:00.749938   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:00.750006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:00.787302   79191 cri.go:89] found id: ""
	I0816 00:36:00.787325   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.787332   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:00.787338   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:00.787392   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:00.821866   79191 cri.go:89] found id: ""
	I0816 00:36:00.821894   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.821906   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:00.821914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:00.821971   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:00.856346   79191 cri.go:89] found id: ""
	I0816 00:36:00.856369   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.856377   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:00.856382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:00.856431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:00.893569   79191 cri.go:89] found id: ""
	I0816 00:36:00.893596   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.893606   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:00.893614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:00.893677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:00.930342   79191 cri.go:89] found id: ""
	I0816 00:36:00.930367   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.930378   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:00.930386   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:00.930622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:00.966039   79191 cri.go:89] found id: ""
	I0816 00:36:00.966071   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.966085   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:00.966095   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:00.966109   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:01.045594   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:01.045631   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:01.089555   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:01.089586   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:01.141597   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:01.141633   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:01.156260   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:01.156286   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:01.230573   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:56.995077   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:58.995897   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.495116   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.426099   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.926011   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.927327   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.376842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.875993   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.730825   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:03.744766   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:03.744838   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:03.781095   79191 cri.go:89] found id: ""
	I0816 00:36:03.781124   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.781142   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:03.781150   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:03.781215   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:03.815637   79191 cri.go:89] found id: ""
	I0816 00:36:03.815669   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.815680   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:03.815687   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:03.815741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:03.850076   79191 cri.go:89] found id: ""
	I0816 00:36:03.850110   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.850122   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:03.850130   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:03.850185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:03.888840   79191 cri.go:89] found id: ""
	I0816 00:36:03.888863   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.888872   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:03.888879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:03.888941   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:03.928317   79191 cri.go:89] found id: ""
	I0816 00:36:03.928341   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.928350   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:03.928359   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:03.928413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:03.964709   79191 cri.go:89] found id: ""
	I0816 00:36:03.964741   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.964751   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:03.964760   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:03.964830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:03.999877   79191 cri.go:89] found id: ""
	I0816 00:36:03.999902   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.999912   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:03.999919   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:03.999981   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:04.036772   79191 cri.go:89] found id: ""
	I0816 00:36:04.036799   79191 logs.go:276] 0 containers: []
	W0816 00:36:04.036810   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:04.036820   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:04.036833   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:04.118843   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:04.118879   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:04.162491   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:04.162548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:04.215100   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:04.215134   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:04.229043   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:04.229069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:04.307480   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:03.495661   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.995711   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.426223   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:08.426470   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.876718   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:07.877431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.807640   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:06.821144   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:06.821203   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:06.857743   79191 cri.go:89] found id: ""
	I0816 00:36:06.857776   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.857786   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:06.857794   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:06.857872   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:06.895980   79191 cri.go:89] found id: ""
	I0816 00:36:06.896007   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.896018   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:06.896025   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:06.896090   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:06.935358   79191 cri.go:89] found id: ""
	I0816 00:36:06.935389   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.935399   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:06.935406   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:06.935461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:06.971533   79191 cri.go:89] found id: ""
	I0816 00:36:06.971561   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.971572   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:06.971580   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:06.971640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:07.007786   79191 cri.go:89] found id: ""
	I0816 00:36:07.007812   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.007823   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:07.007830   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:07.007890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:07.044060   79191 cri.go:89] found id: ""
	I0816 00:36:07.044092   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.044104   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:07.044112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:07.044185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:07.080058   79191 cri.go:89] found id: ""
	I0816 00:36:07.080085   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.080094   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:07.080101   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:07.080156   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:07.117749   79191 cri.go:89] found id: ""
	I0816 00:36:07.117773   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.117780   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:07.117787   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:07.117799   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:07.171418   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:07.171453   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:07.185520   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:07.185542   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:07.257817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:07.257872   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:07.257888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:07.339530   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:07.339576   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:09.882613   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:09.895873   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:09.895950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:09.936739   79191 cri.go:89] found id: ""
	I0816 00:36:09.936766   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.936774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:09.936780   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:09.936836   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:09.974145   79191 cri.go:89] found id: ""
	I0816 00:36:09.974168   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.974180   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:09.974186   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:09.974243   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:10.012166   79191 cri.go:89] found id: ""
	I0816 00:36:10.012196   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.012206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:10.012214   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:10.012265   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:10.051080   79191 cri.go:89] found id: ""
	I0816 00:36:10.051103   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.051111   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:10.051117   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:10.051176   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:10.088519   79191 cri.go:89] found id: ""
	I0816 00:36:10.088548   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.088559   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:10.088567   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:10.088628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:10.123718   79191 cri.go:89] found id: ""
	I0816 00:36:10.123744   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.123752   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:10.123758   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:10.123805   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:10.161900   79191 cri.go:89] found id: ""
	I0816 00:36:10.161922   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.161929   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:10.161995   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:10.162064   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:10.196380   79191 cri.go:89] found id: ""
	I0816 00:36:10.196408   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.196419   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:10.196429   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:10.196443   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:10.248276   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:10.248309   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:10.262241   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:10.262269   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:10.340562   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:10.340598   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:10.340626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:10.417547   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:10.417578   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:07.996930   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:09.997666   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.426502   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.426976   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.377172   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.877236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.962310   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:12.976278   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:12.976338   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:13.014501   79191 cri.go:89] found id: ""
	I0816 00:36:13.014523   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.014530   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:13.014536   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:13.014587   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:13.055942   79191 cri.go:89] found id: ""
	I0816 00:36:13.055970   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.055979   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:13.055987   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:13.056048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:13.090309   79191 cri.go:89] found id: ""
	I0816 00:36:13.090336   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.090346   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:13.090354   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:13.090413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:13.124839   79191 cri.go:89] found id: ""
	I0816 00:36:13.124865   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.124876   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:13.124884   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:13.124945   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:13.164535   79191 cri.go:89] found id: ""
	I0816 00:36:13.164560   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.164567   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:13.164573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:13.164630   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:13.198651   79191 cri.go:89] found id: ""
	I0816 00:36:13.198699   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.198710   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:13.198718   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:13.198785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:13.233255   79191 cri.go:89] found id: ""
	I0816 00:36:13.233278   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.233286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:13.233292   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:13.233348   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:13.267327   79191 cri.go:89] found id: ""
	I0816 00:36:13.267351   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.267359   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:13.267367   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:13.267384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:13.352053   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:13.352089   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:13.393438   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:13.393471   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:13.445397   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:13.445430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:13.459143   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:13.459177   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:13.530160   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.031296   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:16.045557   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:16.045618   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:16.081828   79191 cri.go:89] found id: ""
	I0816 00:36:16.081871   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.081882   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:16.081890   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:16.081949   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:16.116228   79191 cri.go:89] found id: ""
	I0816 00:36:16.116254   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.116264   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:16.116272   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:16.116334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:16.150051   79191 cri.go:89] found id: ""
	I0816 00:36:16.150079   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.150087   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:16.150093   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:16.150139   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:16.186218   79191 cri.go:89] found id: ""
	I0816 00:36:16.186241   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.186248   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:16.186254   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:16.186301   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:16.223223   79191 cri.go:89] found id: ""
	I0816 00:36:16.223255   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.223263   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:16.223270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:16.223316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:16.259929   79191 cri.go:89] found id: ""
	I0816 00:36:16.259953   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.259960   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:16.259970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:16.260099   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:16.294611   79191 cri.go:89] found id: ""
	I0816 00:36:16.294633   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.294641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:16.294649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:16.294725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:16.333492   79191 cri.go:89] found id: ""
	I0816 00:36:16.333523   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.333533   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:16.333544   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:16.333563   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:16.385970   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:16.386002   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:16.400359   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:16.400384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:16.471363   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.471388   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:16.471408   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:16.555990   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:16.556022   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:12.495406   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.995145   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.926160   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.426768   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:15.376672   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.876395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.876542   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.099502   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:19.112649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:19.112706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:19.145809   79191 cri.go:89] found id: ""
	I0816 00:36:19.145837   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.145858   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:19.145865   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:19.145928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:19.183737   79191 cri.go:89] found id: ""
	I0816 00:36:19.183763   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.183774   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:19.183781   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:19.183841   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:19.219729   79191 cri.go:89] found id: ""
	I0816 00:36:19.219756   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.219764   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:19.219770   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:19.219815   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:19.254450   79191 cri.go:89] found id: ""
	I0816 00:36:19.254474   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.254481   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:19.254488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:19.254540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:19.289543   79191 cri.go:89] found id: ""
	I0816 00:36:19.289573   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.289585   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:19.289592   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:19.289651   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:19.330727   79191 cri.go:89] found id: ""
	I0816 00:36:19.330748   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.330756   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:19.330762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:19.330809   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:19.368952   79191 cri.go:89] found id: ""
	I0816 00:36:19.368978   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.368986   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:19.368992   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:19.369048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:19.406211   79191 cri.go:89] found id: ""
	I0816 00:36:19.406247   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.406258   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:19.406268   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:19.406282   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:19.457996   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:19.458032   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:19.472247   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:19.472274   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:19.542840   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:19.542862   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:19.542876   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:19.624478   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:19.624520   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:16.997148   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.496434   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.427251   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:21.925550   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.925858   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.376318   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:24.376431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.165884   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:22.180005   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:22.180078   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:22.217434   79191 cri.go:89] found id: ""
	I0816 00:36:22.217463   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.217471   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:22.217478   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:22.217534   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:22.250679   79191 cri.go:89] found id: ""
	I0816 00:36:22.250708   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.250717   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:22.250725   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:22.250785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:22.284294   79191 cri.go:89] found id: ""
	I0816 00:36:22.284324   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.284334   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:22.284341   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:22.284403   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:22.320747   79191 cri.go:89] found id: ""
	I0816 00:36:22.320779   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.320790   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:22.320799   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:22.320858   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:22.355763   79191 cri.go:89] found id: ""
	I0816 00:36:22.355793   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.355803   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:22.355811   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:22.355871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:22.392762   79191 cri.go:89] found id: ""
	I0816 00:36:22.392788   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.392796   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:22.392802   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:22.392860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:22.426577   79191 cri.go:89] found id: ""
	I0816 00:36:22.426605   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.426614   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:22.426621   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:22.426682   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:22.459989   79191 cri.go:89] found id: ""
	I0816 00:36:22.460018   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.460030   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:22.460040   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:22.460054   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:22.545782   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:22.545820   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:22.587404   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:22.587431   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:22.638519   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:22.638559   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:22.653064   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:22.653087   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:22.734333   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.234823   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:25.248716   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:25.248787   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:25.284760   79191 cri.go:89] found id: ""
	I0816 00:36:25.284786   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.284793   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:25.284799   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:25.284870   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:25.325523   79191 cri.go:89] found id: ""
	I0816 00:36:25.325548   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.325556   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:25.325562   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:25.325621   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:25.365050   79191 cri.go:89] found id: ""
	I0816 00:36:25.365078   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.365088   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:25.365096   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:25.365155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:25.405005   79191 cri.go:89] found id: ""
	I0816 00:36:25.405038   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.405049   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:25.405062   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:25.405121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:25.444622   79191 cri.go:89] found id: ""
	I0816 00:36:25.444648   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.444656   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:25.444662   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:25.444710   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:25.485364   79191 cri.go:89] found id: ""
	I0816 00:36:25.485394   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.485404   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:25.485413   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:25.485492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:25.521444   79191 cri.go:89] found id: ""
	I0816 00:36:25.521471   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.521482   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:25.521490   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:25.521550   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:25.556763   79191 cri.go:89] found id: ""
	I0816 00:36:25.556789   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.556796   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:25.556805   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:25.556817   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:25.606725   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:25.606759   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:25.623080   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:25.623108   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:25.705238   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.705258   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:25.705280   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:25.782188   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:25.782224   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:21.994519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.995061   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.494442   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:25.926835   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.427012   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.876206   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.876563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.325018   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:28.337778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:28.337860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:28.378452   79191 cri.go:89] found id: ""
	I0816 00:36:28.378482   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.378492   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:28.378499   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:28.378556   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:28.412103   79191 cri.go:89] found id: ""
	I0816 00:36:28.412132   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.412143   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:28.412150   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:28.412214   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:28.447363   79191 cri.go:89] found id: ""
	I0816 00:36:28.447388   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.447396   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:28.447401   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:28.447452   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:28.481199   79191 cri.go:89] found id: ""
	I0816 00:36:28.481228   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.481242   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:28.481251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:28.481305   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:28.517523   79191 cri.go:89] found id: ""
	I0816 00:36:28.517545   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.517552   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:28.517558   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:28.517620   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:28.552069   79191 cri.go:89] found id: ""
	I0816 00:36:28.552101   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.552112   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:28.552120   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:28.552193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:28.594124   79191 cri.go:89] found id: ""
	I0816 00:36:28.594148   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.594158   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:28.594166   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:28.594228   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:28.631451   79191 cri.go:89] found id: ""
	I0816 00:36:28.631472   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.631480   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:28.631488   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:28.631498   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:28.685335   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:28.685368   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:28.700852   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:28.700877   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:28.773932   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:28.773957   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:28.773972   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:28.848951   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:28.848989   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:31.389208   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:31.403731   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:31.403813   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:31.440979   79191 cri.go:89] found id: ""
	I0816 00:36:31.441010   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.441020   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:31.441028   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:31.441092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:31.476435   79191 cri.go:89] found id: ""
	I0816 00:36:31.476458   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.476465   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:31.476471   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:31.476530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:31.514622   79191 cri.go:89] found id: ""
	I0816 00:36:31.514644   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.514651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:31.514657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:31.514715   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:31.554503   79191 cri.go:89] found id: ""
	I0816 00:36:31.554533   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.554543   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:31.554551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:31.554609   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:31.590283   79191 cri.go:89] found id: ""
	I0816 00:36:31.590317   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.590325   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:31.590332   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:31.590380   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:31.625969   79191 cri.go:89] found id: ""
	I0816 00:36:31.626003   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.626014   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:31.626031   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:31.626102   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:31.660489   79191 cri.go:89] found id: ""
	I0816 00:36:31.660513   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.660520   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:31.660526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:31.660583   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:31.694728   79191 cri.go:89] found id: ""
	I0816 00:36:31.694761   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.694769   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:31.694779   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:31.694790   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:31.760631   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:31.760663   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:31.774858   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:31.774886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:36:28.994228   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.994276   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.926313   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.426045   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.877175   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.378602   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:36:31.851125   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:31.851145   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:31.851156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:31.934491   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:31.934521   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:34.476368   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:34.489252   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:34.489308   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:34.524932   79191 cri.go:89] found id: ""
	I0816 00:36:34.524964   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.524972   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:34.524977   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:34.525032   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:34.559434   79191 cri.go:89] found id: ""
	I0816 00:36:34.559462   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.559473   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:34.559481   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:34.559543   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:34.598700   79191 cri.go:89] found id: ""
	I0816 00:36:34.598728   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.598739   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:34.598747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:34.598808   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:34.632413   79191 cri.go:89] found id: ""
	I0816 00:36:34.632438   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.632448   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:34.632456   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:34.632514   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:34.668385   79191 cri.go:89] found id: ""
	I0816 00:36:34.668409   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.668418   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:34.668425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:34.668486   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:34.703728   79191 cri.go:89] found id: ""
	I0816 00:36:34.703754   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.703764   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:34.703772   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:34.703832   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:34.743119   79191 cri.go:89] found id: ""
	I0816 00:36:34.743152   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.743161   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:34.743171   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:34.743230   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:34.778932   79191 cri.go:89] found id: ""
	I0816 00:36:34.778955   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.778963   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:34.778971   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:34.778987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:34.832050   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:34.832084   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:34.845700   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:34.845728   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:34.917535   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:34.917554   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:34.917565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:35.005262   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:35.005295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:32.994435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:34.994503   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.926422   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.876400   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:38.376351   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.547107   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:37.562035   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:37.562095   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:37.605992   79191 cri.go:89] found id: ""
	I0816 00:36:37.606021   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.606028   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:37.606035   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:37.606092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:37.642613   79191 cri.go:89] found id: ""
	I0816 00:36:37.642642   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.642653   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:37.642660   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:37.642708   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:37.677810   79191 cri.go:89] found id: ""
	I0816 00:36:37.677863   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.677875   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:37.677883   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:37.677939   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:37.714490   79191 cri.go:89] found id: ""
	I0816 00:36:37.714514   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.714522   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:37.714529   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:37.714575   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:37.750807   79191 cri.go:89] found id: ""
	I0816 00:36:37.750837   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.750844   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:37.750850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:37.750912   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:37.790307   79191 cri.go:89] found id: ""
	I0816 00:36:37.790337   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.790347   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:37.790355   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:37.790404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:37.826811   79191 cri.go:89] found id: ""
	I0816 00:36:37.826838   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.826848   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:37.826856   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:37.826920   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:37.862066   79191 cri.go:89] found id: ""
	I0816 00:36:37.862091   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.862101   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:37.862112   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:37.862127   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:37.917127   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:37.917161   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:37.932986   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:37.933024   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:38.008715   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:38.008739   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:38.008754   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:38.088744   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:38.088778   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:40.643426   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:40.659064   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:40.659128   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:40.702486   79191 cri.go:89] found id: ""
	I0816 00:36:40.702513   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.702523   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:40.702530   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:40.702595   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:40.736016   79191 cri.go:89] found id: ""
	I0816 00:36:40.736044   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.736057   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:40.736064   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:40.736125   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:40.779665   79191 cri.go:89] found id: ""
	I0816 00:36:40.779704   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.779724   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:40.779733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:40.779795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:40.818612   79191 cri.go:89] found id: ""
	I0816 00:36:40.818633   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.818640   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:40.818647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:40.818695   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:40.855990   79191 cri.go:89] found id: ""
	I0816 00:36:40.856014   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.856021   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:40.856027   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:40.856074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:40.894792   79191 cri.go:89] found id: ""
	I0816 00:36:40.894827   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.894836   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:40.894845   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:40.894894   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:40.932233   79191 cri.go:89] found id: ""
	I0816 00:36:40.932255   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.932263   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:40.932268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:40.932324   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:40.974601   79191 cri.go:89] found id: ""
	I0816 00:36:40.974624   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.974633   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:40.974642   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:40.974660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:41.049185   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:41.049209   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:41.049223   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:41.129446   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:41.129481   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:41.170312   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:41.170341   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:41.226217   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:41.226254   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:36.995268   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:39.494273   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:41.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.426501   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.926122   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.877227   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.878644   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:43.741485   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:43.756248   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:43.756325   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:43.792440   79191 cri.go:89] found id: ""
	I0816 00:36:43.792469   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.792480   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:43.792488   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:43.792549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:43.829906   79191 cri.go:89] found id: ""
	I0816 00:36:43.829933   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.829941   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:43.829947   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:43.830003   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:43.880305   79191 cri.go:89] found id: ""
	I0816 00:36:43.880330   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.880337   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:43.880343   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:43.880399   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:43.937899   79191 cri.go:89] found id: ""
	I0816 00:36:43.937929   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.937939   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:43.937953   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:43.938023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:43.997578   79191 cri.go:89] found id: ""
	I0816 00:36:43.997603   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.997610   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:43.997620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:43.997672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:44.035606   79191 cri.go:89] found id: ""
	I0816 00:36:44.035629   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.035637   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:44.035643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:44.035692   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:44.072919   79191 cri.go:89] found id: ""
	I0816 00:36:44.072950   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.072961   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:44.072968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:44.073043   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:44.108629   79191 cri.go:89] found id: ""
	I0816 00:36:44.108659   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.108681   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:44.108692   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:44.108705   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:44.149127   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:44.149151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:44.201694   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:44.201737   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:44.217161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:44.217199   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:44.284335   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:44.284362   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:44.284379   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:43.996478   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.494382   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:44.926542   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.926713   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:45.376030   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:47.875418   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.877201   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.869196   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:46.883519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:46.883584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:46.924767   79191 cri.go:89] found id: ""
	I0816 00:36:46.924806   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.924821   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:46.924829   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:46.924889   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:46.963282   79191 cri.go:89] found id: ""
	I0816 00:36:46.963309   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.963320   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:46.963327   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:46.963389   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:47.001421   79191 cri.go:89] found id: ""
	I0816 00:36:47.001450   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.001458   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:47.001463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:47.001518   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:47.037679   79191 cri.go:89] found id: ""
	I0816 00:36:47.037702   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.037713   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:47.037720   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:47.037778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:47.078009   79191 cri.go:89] found id: ""
	I0816 00:36:47.078039   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.078050   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:47.078056   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:47.078113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:47.119032   79191 cri.go:89] found id: ""
	I0816 00:36:47.119056   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.119064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:47.119069   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:47.119127   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:47.154893   79191 cri.go:89] found id: ""
	I0816 00:36:47.154919   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.154925   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:47.154933   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:47.154993   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:47.194544   79191 cri.go:89] found id: ""
	I0816 00:36:47.194571   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.194582   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:47.194592   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:47.194612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:47.267148   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:47.267172   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:47.267186   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:47.345257   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:47.345295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:47.386207   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:47.386233   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:47.436171   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:47.436201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:49.949977   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:49.965702   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:49.965761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:50.002443   79191 cri.go:89] found id: ""
	I0816 00:36:50.002470   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.002481   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:50.002489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:50.002548   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:50.039123   79191 cri.go:89] found id: ""
	I0816 00:36:50.039155   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.039162   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:50.039168   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:50.039220   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:50.074487   79191 cri.go:89] found id: ""
	I0816 00:36:50.074517   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.074527   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:50.074535   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:50.074593   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:50.108980   79191 cri.go:89] found id: ""
	I0816 00:36:50.109008   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.109018   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:50.109025   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:50.109082   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:50.149182   79191 cri.go:89] found id: ""
	I0816 00:36:50.149202   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.149209   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:50.149215   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:50.149261   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:50.183066   79191 cri.go:89] found id: ""
	I0816 00:36:50.183094   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.183102   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:50.183108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:50.183165   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:50.220200   79191 cri.go:89] found id: ""
	I0816 00:36:50.220231   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.220240   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:50.220246   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:50.220302   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:50.258059   79191 cri.go:89] found id: ""
	I0816 00:36:50.258083   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.258092   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:50.258100   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:50.258110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:50.300560   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:50.300591   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:50.350548   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:50.350581   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:50.364792   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:50.364816   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:50.437723   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:50.437746   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:50.437761   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:48.995009   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:50.995542   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.425926   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:51.427896   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.926363   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:52.375826   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:54.876435   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.015846   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:53.029184   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:53.029246   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:53.064306   79191 cri.go:89] found id: ""
	I0816 00:36:53.064338   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.064346   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:53.064352   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:53.064404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:53.104425   79191 cri.go:89] found id: ""
	I0816 00:36:53.104458   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.104468   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:53.104476   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:53.104538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:53.139470   79191 cri.go:89] found id: ""
	I0816 00:36:53.139493   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.139500   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:53.139506   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:53.139551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:53.185195   79191 cri.go:89] found id: ""
	I0816 00:36:53.185225   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.185234   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:53.185242   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:53.185300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:53.221897   79191 cri.go:89] found id: ""
	I0816 00:36:53.221925   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.221935   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:53.221943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:53.222006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:53.258810   79191 cri.go:89] found id: ""
	I0816 00:36:53.258841   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.258852   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:53.258859   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:53.258924   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:53.298672   79191 cri.go:89] found id: ""
	I0816 00:36:53.298701   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.298711   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:53.298719   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:53.298778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:53.333498   79191 cri.go:89] found id: ""
	I0816 00:36:53.333520   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.333527   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:53.333535   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:53.333548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.370495   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:53.370530   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:53.423938   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:53.423982   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:53.438897   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:53.438926   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:53.505951   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:53.505973   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:53.505987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.089638   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:56.103832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:56.103893   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:56.148010   79191 cri.go:89] found id: ""
	I0816 00:36:56.148038   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.148048   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:56.148057   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:56.148120   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:56.185631   79191 cri.go:89] found id: ""
	I0816 00:36:56.185663   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.185673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:56.185680   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:56.185739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:56.222064   79191 cri.go:89] found id: ""
	I0816 00:36:56.222093   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.222104   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:56.222112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:56.222162   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:56.260462   79191 cri.go:89] found id: ""
	I0816 00:36:56.260494   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.260504   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:56.260513   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:56.260574   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:56.296125   79191 cri.go:89] found id: ""
	I0816 00:36:56.296154   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.296164   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:56.296172   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:56.296236   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:56.333278   79191 cri.go:89] found id: ""
	I0816 00:36:56.333305   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.333316   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:56.333324   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:56.333385   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:56.368924   79191 cri.go:89] found id: ""
	I0816 00:36:56.368952   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.368962   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:56.368970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:56.369034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:56.407148   79191 cri.go:89] found id: ""
	I0816 00:36:56.407180   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.407190   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:56.407201   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:56.407215   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:56.464745   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:56.464779   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:56.478177   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:56.478204   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:56.555827   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:56.555851   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:56.555864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.640001   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:56.640040   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.495546   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.994786   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:58.426865   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:57.376484   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.876765   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.181423   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:59.195722   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:59.195804   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:59.232043   79191 cri.go:89] found id: ""
	I0816 00:36:59.232067   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.232075   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:59.232081   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:59.232132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:59.270628   79191 cri.go:89] found id: ""
	I0816 00:36:59.270656   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.270673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:59.270681   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:59.270743   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:59.304054   79191 cri.go:89] found id: ""
	I0816 00:36:59.304089   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.304100   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:59.304108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:59.304169   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:59.339386   79191 cri.go:89] found id: ""
	I0816 00:36:59.339410   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.339417   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:59.339423   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:59.339483   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:59.381313   79191 cri.go:89] found id: ""
	I0816 00:36:59.381361   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.381376   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:59.381385   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:59.381449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:59.417060   79191 cri.go:89] found id: ""
	I0816 00:36:59.417090   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.417101   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:59.417109   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:59.417160   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:59.461034   79191 cri.go:89] found id: ""
	I0816 00:36:59.461060   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.461071   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:59.461078   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:59.461136   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:59.496248   79191 cri.go:89] found id: ""
	I0816 00:36:59.496276   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.496286   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:59.496297   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:59.496312   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:59.566779   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:59.566803   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:59.566829   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:59.651999   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:59.652034   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:59.693286   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:59.693310   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:59.746677   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:59.746711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:58.494370   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.494959   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.927036   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:03.425008   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.376921   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.876676   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.262527   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:02.277903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:02.277965   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:02.323846   79191 cri.go:89] found id: ""
	I0816 00:37:02.323868   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.323876   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:02.323882   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:02.323938   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:02.359552   79191 cri.go:89] found id: ""
	I0816 00:37:02.359578   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.359589   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:02.359596   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:02.359657   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:02.395062   79191 cri.go:89] found id: ""
	I0816 00:37:02.395087   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.395094   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:02.395100   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:02.395155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:02.432612   79191 cri.go:89] found id: ""
	I0816 00:37:02.432636   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.432646   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:02.432654   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:02.432712   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:02.468612   79191 cri.go:89] found id: ""
	I0816 00:37:02.468640   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.468651   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:02.468659   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:02.468716   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:02.514472   79191 cri.go:89] found id: ""
	I0816 00:37:02.514500   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.514511   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:02.514519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:02.514576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:02.551964   79191 cri.go:89] found id: ""
	I0816 00:37:02.551993   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.552003   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:02.552011   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:02.552061   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:02.588018   79191 cri.go:89] found id: ""
	I0816 00:37:02.588044   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.588053   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:02.588063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:02.588081   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:02.638836   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:02.638875   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:02.653581   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:02.653613   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:02.737018   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.737047   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:02.737065   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:02.819726   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:02.819763   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.364943   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:05.379433   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:05.379492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:05.419165   79191 cri.go:89] found id: ""
	I0816 00:37:05.419191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.419198   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:05.419204   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:05.419264   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:05.454417   79191 cri.go:89] found id: ""
	I0816 00:37:05.454438   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.454446   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:05.454452   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:05.454497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:05.490162   79191 cri.go:89] found id: ""
	I0816 00:37:05.490191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.490203   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:05.490210   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:05.490268   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:05.527303   79191 cri.go:89] found id: ""
	I0816 00:37:05.527327   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.527334   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:05.527340   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:05.527393   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:05.562271   79191 cri.go:89] found id: ""
	I0816 00:37:05.562302   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.562310   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:05.562316   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:05.562374   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:05.597800   79191 cri.go:89] found id: ""
	I0816 00:37:05.597823   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.597830   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:05.597837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:05.597905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:05.633996   79191 cri.go:89] found id: ""
	I0816 00:37:05.634021   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.634028   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:05.634034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:05.634088   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:05.672408   79191 cri.go:89] found id: ""
	I0816 00:37:05.672437   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.672446   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:05.672457   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:05.672472   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:05.750956   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:05.750995   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.795573   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:05.795603   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:05.848560   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:05.848593   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:05.862245   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:05.862268   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:05.938704   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.495728   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.994839   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:05.425507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:07.426459   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:06.877664   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.375601   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:08.439692   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:08.452850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:08.452927   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:08.490015   79191 cri.go:89] found id: ""
	I0816 00:37:08.490043   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.490053   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:08.490060   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:08.490121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:08.529631   79191 cri.go:89] found id: ""
	I0816 00:37:08.529665   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.529676   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:08.529689   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:08.529747   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:08.564858   79191 cri.go:89] found id: ""
	I0816 00:37:08.564885   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.564896   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:08.564904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:08.564966   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:08.601144   79191 cri.go:89] found id: ""
	I0816 00:37:08.601180   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.601190   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:08.601200   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:08.601257   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:08.637050   79191 cri.go:89] found id: ""
	I0816 00:37:08.637081   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.637090   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:08.637098   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:08.637158   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:08.670613   79191 cri.go:89] found id: ""
	I0816 00:37:08.670644   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.670655   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:08.670663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:08.670727   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:08.704664   79191 cri.go:89] found id: ""
	I0816 00:37:08.704690   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.704698   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:08.704704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:08.704754   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:08.741307   79191 cri.go:89] found id: ""
	I0816 00:37:08.741337   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.741348   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:08.741360   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:08.741374   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:08.755434   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:08.755459   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:08.828118   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:08.828140   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:08.828151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:08.911565   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:08.911605   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:08.954907   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:08.954937   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.508848   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:11.521998   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:11.522060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:11.558581   79191 cri.go:89] found id: ""
	I0816 00:37:11.558611   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.558622   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:11.558630   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:11.558697   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:11.593798   79191 cri.go:89] found id: ""
	I0816 00:37:11.593822   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.593830   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:11.593836   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:11.593905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:11.629619   79191 cri.go:89] found id: ""
	I0816 00:37:11.629648   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.629658   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:11.629664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:11.629717   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:11.666521   79191 cri.go:89] found id: ""
	I0816 00:37:11.666548   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.666556   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:11.666562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:11.666607   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:11.703374   79191 cri.go:89] found id: ""
	I0816 00:37:11.703406   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.703417   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:11.703427   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:11.703491   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:11.739374   79191 cri.go:89] found id: ""
	I0816 00:37:11.739403   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.739413   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:11.739420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:11.739475   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:11.774981   79191 cri.go:89] found id: ""
	I0816 00:37:11.775006   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.775013   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:11.775019   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:11.775074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:06.995675   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.495024   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:12.428179   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.377241   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.875723   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.809561   79191 cri.go:89] found id: ""
	I0816 00:37:11.809590   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.809601   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:11.809612   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:11.809626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.863071   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:11.863116   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:11.878161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:11.878191   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:11.953572   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.953594   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:11.953608   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:12.035815   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:12.035848   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:14.576547   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:14.590747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:14.590802   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:14.626732   79191 cri.go:89] found id: ""
	I0816 00:37:14.626762   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.626774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:14.626781   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:14.626833   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:14.662954   79191 cri.go:89] found id: ""
	I0816 00:37:14.662978   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.662988   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:14.662996   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:14.663057   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:14.697618   79191 cri.go:89] found id: ""
	I0816 00:37:14.697646   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.697656   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:14.697663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:14.697725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:14.735137   79191 cri.go:89] found id: ""
	I0816 00:37:14.735161   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.735168   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:14.735174   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:14.735222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:14.770625   79191 cri.go:89] found id: ""
	I0816 00:37:14.770648   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.770655   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:14.770660   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:14.770718   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:14.808678   79191 cri.go:89] found id: ""
	I0816 00:37:14.808708   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.808718   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:14.808726   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:14.808795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:14.847321   79191 cri.go:89] found id: ""
	I0816 00:37:14.847349   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.847360   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:14.847368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:14.847425   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:14.886110   79191 cri.go:89] found id: ""
	I0816 00:37:14.886136   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.886147   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:14.886156   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:14.886175   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:14.971978   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:14.972013   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:15.015620   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:15.015644   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:15.067372   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:15.067405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:15.081629   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:15.081652   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:15.151580   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.995551   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.995831   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.495016   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:14.926297   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.926367   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:18.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:15.876514   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.877987   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.652362   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:17.666201   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:17.666278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:17.698723   79191 cri.go:89] found id: ""
	I0816 00:37:17.698760   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.698772   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:17.698778   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:17.698827   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:17.732854   79191 cri.go:89] found id: ""
	I0816 00:37:17.732883   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.732893   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:17.732901   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:17.732957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:17.767665   79191 cri.go:89] found id: ""
	I0816 00:37:17.767691   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.767701   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:17.767709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:17.767769   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:17.801490   79191 cri.go:89] found id: ""
	I0816 00:37:17.801512   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.801520   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:17.801526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:17.801579   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:17.837451   79191 cri.go:89] found id: ""
	I0816 00:37:17.837479   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.837490   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:17.837498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:17.837562   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:17.872898   79191 cri.go:89] found id: ""
	I0816 00:37:17.872924   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.872934   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:17.872943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:17.873002   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:17.910325   79191 cri.go:89] found id: ""
	I0816 00:37:17.910352   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.910362   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:17.910370   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:17.910431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:17.946885   79191 cri.go:89] found id: ""
	I0816 00:37:17.946909   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.946916   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:17.946923   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:17.946935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:18.014011   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:18.014045   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:18.028850   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:18.028886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:18.099362   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:18.099381   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:18.099396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:18.180552   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:18.180588   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:20.720810   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:20.733806   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:20.733887   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:20.771300   79191 cri.go:89] found id: ""
	I0816 00:37:20.771323   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.771330   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:20.771336   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:20.771394   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:20.812327   79191 cri.go:89] found id: ""
	I0816 00:37:20.812355   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.812362   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:20.812369   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:20.812430   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:20.846830   79191 cri.go:89] found id: ""
	I0816 00:37:20.846861   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.846872   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:20.846879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:20.846948   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:20.889979   79191 cri.go:89] found id: ""
	I0816 00:37:20.890005   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.890015   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:20.890023   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:20.890086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:20.933732   79191 cri.go:89] found id: ""
	I0816 00:37:20.933762   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.933772   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:20.933778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:20.933824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:20.972341   79191 cri.go:89] found id: ""
	I0816 00:37:20.972368   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.972376   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:20.972382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:20.972444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:21.011179   79191 cri.go:89] found id: ""
	I0816 00:37:21.011207   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.011216   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:21.011224   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:21.011282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:21.045645   79191 cri.go:89] found id: ""
	I0816 00:37:21.045668   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.045675   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:21.045684   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:21.045694   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:21.099289   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:21.099321   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:21.113814   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:21.113858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:21.186314   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:21.186337   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:21.186355   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:21.271116   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:21.271152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:18.994476   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.996435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:21.425187   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.425456   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.377999   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:22.877014   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.818598   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:23.832330   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:23.832387   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:23.869258   79191 cri.go:89] found id: ""
	I0816 00:37:23.869279   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.869286   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:23.869293   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:23.869342   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:23.903958   79191 cri.go:89] found id: ""
	I0816 00:37:23.903989   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.903999   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:23.904006   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:23.904060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:23.943110   79191 cri.go:89] found id: ""
	I0816 00:37:23.943142   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.943153   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:23.943160   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:23.943222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:23.979325   79191 cri.go:89] found id: ""
	I0816 00:37:23.979356   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.979366   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:23.979374   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:23.979435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:24.017570   79191 cri.go:89] found id: ""
	I0816 00:37:24.017597   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.017607   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:24.017614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:24.017684   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:24.051522   79191 cri.go:89] found id: ""
	I0816 00:37:24.051546   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.051555   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:24.051562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:24.051626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:24.087536   79191 cri.go:89] found id: ""
	I0816 00:37:24.087561   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.087572   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:24.087579   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:24.087644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:24.123203   79191 cri.go:89] found id: ""
	I0816 00:37:24.123233   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.123245   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:24.123256   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:24.123276   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:24.178185   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:24.178225   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:24.192895   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:24.192920   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:24.273471   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:24.273492   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:24.273504   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:24.357890   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:24.357936   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:23.495269   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.994859   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.427328   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.927068   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.376932   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.377168   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:29.876182   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:26.950399   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:26.964347   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:26.964406   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:27.004694   79191 cri.go:89] found id: ""
	I0816 00:37:27.004722   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.004738   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:27.004745   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:27.004800   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:27.040051   79191 cri.go:89] found id: ""
	I0816 00:37:27.040080   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.040090   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:27.040096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:27.040144   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:27.088614   79191 cri.go:89] found id: ""
	I0816 00:37:27.088642   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.088651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:27.088657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:27.088732   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:27.125427   79191 cri.go:89] found id: ""
	I0816 00:37:27.125450   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.125457   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:27.125464   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:27.125511   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:27.158562   79191 cri.go:89] found id: ""
	I0816 00:37:27.158592   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.158602   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:27.158609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:27.158672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:27.192986   79191 cri.go:89] found id: ""
	I0816 00:37:27.193015   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.193026   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:27.193034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:27.193091   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:27.228786   79191 cri.go:89] found id: ""
	I0816 00:37:27.228828   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.228847   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:27.228858   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:27.228921   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:27.262776   79191 cri.go:89] found id: ""
	I0816 00:37:27.262808   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.262819   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:27.262829   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:27.262844   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:27.276444   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:27.276470   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:27.349918   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:27.349946   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:27.349958   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:27.435030   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:27.435061   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:27.484043   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:27.484069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.038376   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:30.051467   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:30.051530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:30.086346   79191 cri.go:89] found id: ""
	I0816 00:37:30.086376   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.086386   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:30.086394   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:30.086454   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:30.127665   79191 cri.go:89] found id: ""
	I0816 00:37:30.127691   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.127699   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:30.127704   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:30.127757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:30.169901   79191 cri.go:89] found id: ""
	I0816 00:37:30.169929   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.169939   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:30.169950   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:30.170013   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:30.212501   79191 cri.go:89] found id: ""
	I0816 00:37:30.212523   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.212530   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:30.212537   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:30.212584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:30.256560   79191 cri.go:89] found id: ""
	I0816 00:37:30.256583   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.256591   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:30.256597   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:30.256646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:30.291062   79191 cri.go:89] found id: ""
	I0816 00:37:30.291086   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.291093   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:30.291099   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:30.291143   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:30.328325   79191 cri.go:89] found id: ""
	I0816 00:37:30.328353   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.328361   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:30.328368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:30.328415   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:30.364946   79191 cri.go:89] found id: ""
	I0816 00:37:30.364972   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.364981   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:30.364991   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:30.365005   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:30.408090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:30.408117   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.463421   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:30.463456   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:30.479679   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:30.479711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:30.555394   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:30.555416   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:30.555432   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:28.494477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.494598   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.427146   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:32.926282   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:31.877446   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.376145   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:33.137366   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:33.150970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:33.151030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:33.191020   79191 cri.go:89] found id: ""
	I0816 00:37:33.191047   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.191055   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:33.191061   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:33.191112   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:33.227971   79191 cri.go:89] found id: ""
	I0816 00:37:33.228022   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.228030   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:33.228038   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:33.228089   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:33.265036   79191 cri.go:89] found id: ""
	I0816 00:37:33.265065   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.265074   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:33.265079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:33.265126   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:33.300385   79191 cri.go:89] found id: ""
	I0816 00:37:33.300411   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.300418   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:33.300425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:33.300487   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:33.335727   79191 cri.go:89] found id: ""
	I0816 00:37:33.335757   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.335768   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:33.335776   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:33.335839   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:33.373458   79191 cri.go:89] found id: ""
	I0816 00:37:33.373489   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.373500   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:33.373507   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:33.373568   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:33.410380   79191 cri.go:89] found id: ""
	I0816 00:37:33.410404   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.410413   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:33.410420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:33.410480   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:33.451007   79191 cri.go:89] found id: ""
	I0816 00:37:33.451030   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.451040   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:33.451049   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:33.451062   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:33.502215   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:33.502249   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:33.516123   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:33.516152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:33.590898   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:33.590921   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:33.590944   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:33.668404   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:33.668455   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
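The stanza above is one pass of the diagnostic loop that this instance (pid 79191) keeps repeating while no control plane is up: look for a kube-apiserver process, list CRI containers for each expected component, and, finding none, fall back to gathering kubelet, dmesg, CRI-O and container-status logs. The same checks can be run by hand on the node; a minimal sketch using only the commands that appear in the log (the component list is copied from it):

    # Is a kube-apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

    # Has the CRI runtime ever created the expected control-plane containers?
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done

    # With no containers to inspect, the kubelet and CRI-O unit logs are the next stop.
    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40
    sudo journalctl -u crio -n 400 --no-pager | tail -n 40
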
	I0816 00:37:36.209671   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:36.223498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:36.223561   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:36.258980   79191 cri.go:89] found id: ""
	I0816 00:37:36.259041   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.259056   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:36.259064   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:36.259123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:36.293659   79191 cri.go:89] found id: ""
	I0816 00:37:36.293687   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.293694   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:36.293703   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:36.293761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:36.331729   79191 cri.go:89] found id: ""
	I0816 00:37:36.331756   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.331766   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:36.331773   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:36.331830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:36.368441   79191 cri.go:89] found id: ""
	I0816 00:37:36.368470   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.368479   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:36.368486   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:36.368533   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:36.405338   79191 cri.go:89] found id: ""
	I0816 00:37:36.405368   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.405380   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:36.405389   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:36.405448   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:36.441986   79191 cri.go:89] found id: ""
	I0816 00:37:36.442018   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.442029   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:36.442038   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:36.442097   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:36.478102   79191 cri.go:89] found id: ""
	I0816 00:37:36.478183   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.478197   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:36.478206   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:36.478269   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:36.517138   79191 cri.go:89] found id: ""
	I0816 00:37:36.517167   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.517178   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:36.517190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:36.517205   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:36.570009   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:36.570042   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:36.583534   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:36.583565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:36.651765   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:36.651794   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:36.651808   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:36.732836   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:36.732870   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:32.495090   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.996253   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.926615   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:37.425790   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:36.377305   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:38.876443   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.274490   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:39.288528   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:39.288591   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:39.325560   79191 cri.go:89] found id: ""
	I0816 00:37:39.325582   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.325589   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:39.325599   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:39.325656   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:39.365795   79191 cri.go:89] found id: ""
	I0816 00:37:39.365822   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.365829   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:39.365837   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:39.365906   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:39.404933   79191 cri.go:89] found id: ""
	I0816 00:37:39.404961   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.404971   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:39.404977   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:39.405041   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:39.442712   79191 cri.go:89] found id: ""
	I0816 00:37:39.442736   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.442747   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:39.442754   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:39.442814   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:39.484533   79191 cri.go:89] found id: ""
	I0816 00:37:39.484557   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.484566   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:39.484573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:39.484636   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:39.522089   79191 cri.go:89] found id: ""
	I0816 00:37:39.522115   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.522125   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:39.522133   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:39.522194   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:39.557099   79191 cri.go:89] found id: ""
	I0816 00:37:39.557128   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.557138   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:39.557145   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:39.557205   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:39.594809   79191 cri.go:89] found id: ""
	I0816 00:37:39.594838   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.594849   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:39.594859   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:39.594874   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:39.611079   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:39.611110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:39.683156   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:39.683182   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:39.683198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:39.761198   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:39.761235   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:39.800972   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:39.801003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:37.494553   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.495854   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.427910   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.926445   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.376128   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:43.377791   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:42.354816   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:42.368610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:42.368673   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:42.404716   79191 cri.go:89] found id: ""
	I0816 00:37:42.404738   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.404745   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:42.404753   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:42.404798   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:42.441619   79191 cri.go:89] found id: ""
	I0816 00:37:42.441649   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.441660   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:42.441667   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:42.441726   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:42.480928   79191 cri.go:89] found id: ""
	I0816 00:37:42.480965   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.480976   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:42.480983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:42.481051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:42.519187   79191 cri.go:89] found id: ""
	I0816 00:37:42.519216   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.519226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:42.519234   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:42.519292   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:42.554928   79191 cri.go:89] found id: ""
	I0816 00:37:42.554956   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.554967   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:42.554974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:42.555035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:42.593436   79191 cri.go:89] found id: ""
	I0816 00:37:42.593472   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.593481   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:42.593487   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:42.593545   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:42.628078   79191 cri.go:89] found id: ""
	I0816 00:37:42.628101   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.628108   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:42.628113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:42.628172   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:42.662824   79191 cri.go:89] found id: ""
	I0816 00:37:42.662852   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.662862   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:42.662871   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:42.662888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:42.677267   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:42.677290   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:42.749570   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:42.749599   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:42.749615   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:42.831177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:42.831213   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:42.871928   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:42.871957   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.430704   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:45.444400   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:45.444461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:45.479503   79191 cri.go:89] found id: ""
	I0816 00:37:45.479529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.479537   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:45.479543   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:45.479596   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:45.518877   79191 cri.go:89] found id: ""
	I0816 00:37:45.518907   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.518917   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:45.518925   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:45.518992   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:45.553936   79191 cri.go:89] found id: ""
	I0816 00:37:45.553966   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.553977   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:45.553984   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:45.554035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:45.593054   79191 cri.go:89] found id: ""
	I0816 00:37:45.593081   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.593088   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:45.593095   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:45.593147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:45.631503   79191 cri.go:89] found id: ""
	I0816 00:37:45.631529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.631537   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:45.631543   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:45.631599   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:45.667435   79191 cri.go:89] found id: ""
	I0816 00:37:45.667459   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.667466   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:45.667473   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:45.667529   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:45.702140   79191 cri.go:89] found id: ""
	I0816 00:37:45.702168   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.702179   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:45.702187   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:45.702250   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:45.736015   79191 cri.go:89] found id: ""
	I0816 00:37:45.736048   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.736059   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:45.736070   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:45.736085   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:45.817392   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:45.817427   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:45.856421   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:45.856451   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.912429   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:45.912476   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:45.928411   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:45.928435   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:46.001141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
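The recurring "connection to the server localhost:8443 was refused" failure above means nothing is listening on the API server port yet, so every describe-nodes attempt fails until a kube-apiserver container starts. A quick way to confirm that by hand (a sketch; the port and paths are the ones shown in the log):

    # Is anything listening on the apiserver port from the log (8443)?
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"

    # Probe the healthz endpoint directly; "connection refused" matches the log output.
    curl -ksS https://localhost:8443/healthz || true

    # Only once the port answers can the describe-nodes call from the log succeed.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
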
	I0816 00:37:41.995835   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.497033   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.426414   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:46.927720   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:45.876721   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:47.877185   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.877396   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.501317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:48.515114   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:48.515190   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:48.553776   79191 cri.go:89] found id: ""
	I0816 00:37:48.553802   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.553810   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:48.553816   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:48.553890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:48.589760   79191 cri.go:89] found id: ""
	I0816 00:37:48.589786   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.589794   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:48.589800   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:48.589871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:48.629792   79191 cri.go:89] found id: ""
	I0816 00:37:48.629816   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.629825   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:48.629833   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:48.629898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:48.668824   79191 cri.go:89] found id: ""
	I0816 00:37:48.668852   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.668860   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:48.668866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:48.668930   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:48.704584   79191 cri.go:89] found id: ""
	I0816 00:37:48.704615   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.704626   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:48.704634   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:48.704691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:48.738833   79191 cri.go:89] found id: ""
	I0816 00:37:48.738855   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.738863   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:48.738868   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:48.738928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:48.774943   79191 cri.go:89] found id: ""
	I0816 00:37:48.774972   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.774981   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:48.774989   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:48.775051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:48.808802   79191 cri.go:89] found id: ""
	I0816 00:37:48.808825   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.808832   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:48.808841   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:48.808856   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:48.858849   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:48.858880   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:48.873338   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:48.873369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:48.950172   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:48.950195   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:48.950209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:49.038642   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:49.038679   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:51.581947   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:51.596612   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:51.596691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:51.631468   79191 cri.go:89] found id: ""
	I0816 00:37:51.631498   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.631509   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:51.631517   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:51.631577   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:51.666922   79191 cri.go:89] found id: ""
	I0816 00:37:51.666953   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.666963   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:51.666971   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:51.667034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:51.707081   79191 cri.go:89] found id: ""
	I0816 00:37:51.707109   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.707116   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:51.707122   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:51.707189   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:51.743884   79191 cri.go:89] found id: ""
	I0816 00:37:51.743912   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.743925   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:51.743932   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:51.743990   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:51.779565   79191 cri.go:89] found id: ""
	I0816 00:37:51.779595   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.779603   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:51.779610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:51.779658   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:46.994211   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.995446   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.495519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.426703   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.426947   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:53.427050   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:52.377050   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.877759   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.818800   79191 cri.go:89] found id: ""
	I0816 00:37:51.818824   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.818831   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:51.818837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:51.818899   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:51.855343   79191 cri.go:89] found id: ""
	I0816 00:37:51.855367   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.855374   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:51.855380   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:51.855426   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:51.890463   79191 cri.go:89] found id: ""
	I0816 00:37:51.890496   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.890505   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:51.890513   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:51.890526   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:51.977168   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:51.977209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:52.021626   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:52.021660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:52.076983   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:52.077027   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:52.092111   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:52.092142   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:52.172738   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:54.673192   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:54.688780   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.688853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.725279   79191 cri.go:89] found id: ""
	I0816 00:37:54.725308   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.725318   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:54.725325   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.725383   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:54.764326   79191 cri.go:89] found id: ""
	I0816 00:37:54.764353   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.764364   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:54.764372   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:54.764423   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:54.805221   79191 cri.go:89] found id: ""
	I0816 00:37:54.805252   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.805263   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:54.805270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:54.805334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:54.849724   79191 cri.go:89] found id: ""
	I0816 00:37:54.849750   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.849759   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:54.849765   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:54.849824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:54.894438   79191 cri.go:89] found id: ""
	I0816 00:37:54.894460   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.894468   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:54.894475   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:54.894532   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:54.933400   79191 cri.go:89] found id: ""
	I0816 00:37:54.933422   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.933431   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:54.933439   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:54.933497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:54.982249   79191 cri.go:89] found id: ""
	I0816 00:37:54.982277   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.982286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:54.982294   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:54.982353   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:55.024431   79191 cri.go:89] found id: ""
	I0816 00:37:55.024458   79191 logs.go:276] 0 containers: []
	W0816 00:37:55.024469   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:55.024479   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.024499   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:55.107089   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:55.107119   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:55.148949   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.148981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.202865   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.202902   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.218528   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.218556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:55.304995   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:53.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:55.995483   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.926671   78713 pod_ready.go:82] duration metric: took 4m0.007058537s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:37:54.926700   78713 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:37:54.926711   78713 pod_ready.go:39] duration metric: took 4m7.919515966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
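At this point the 4-minute wait for the metrics-server pod gives up ("context deadline exceeded") and the run moves on to waiting for the apiserver process instead. The condition being polled can be checked by hand with kubectl; a sketch, assuming the addon's usual k8s-app=metrics-server label:

    # Show the Ready condition the test was polling for (label selector is an assumption).
    kubectl -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
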
	I0816 00:37:54.926728   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:37:54.926764   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.926821   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.983024   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:54.983043   78713 cri.go:89] found id: ""
	I0816 00:37:54.983052   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:54.983103   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:54.988579   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.988644   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:55.035200   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.035231   78713 cri.go:89] found id: ""
	I0816 00:37:55.035241   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:55.035291   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.040701   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:55.040777   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:55.087306   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.087330   78713 cri.go:89] found id: ""
	I0816 00:37:55.087340   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:55.087422   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.092492   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:55.092560   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:55.144398   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.144424   78713 cri.go:89] found id: ""
	I0816 00:37:55.144433   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:55.144494   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.149882   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:55.149953   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:55.193442   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.193464   78713 cri.go:89] found id: ""
	I0816 00:37:55.193472   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:55.193528   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.198812   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:55.198886   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:55.238634   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.238656   78713 cri.go:89] found id: ""
	I0816 00:37:55.238666   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:55.238729   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.243141   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:55.243229   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:55.281414   78713 cri.go:89] found id: ""
	I0816 00:37:55.281439   78713 logs.go:276] 0 containers: []
	W0816 00:37:55.281449   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:55.281457   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:55.281519   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:55.319336   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.319357   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.319363   78713 cri.go:89] found id: ""
	I0816 00:37:55.319371   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:55.319431   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.323837   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.328777   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:55.328801   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.376259   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:55.376290   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.419553   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:55.419584   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.476026   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.476058   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.544263   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.544297   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.561818   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.561858   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:55.701342   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:55.701375   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:55.746935   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:55.746968   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.787200   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:55.787234   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.825257   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:55.825282   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.865569   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:55.865594   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.905234   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.905269   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:56.391175   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:56.391208   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
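Unlike the empty listings above, this instance (pid 78713) does find control-plane containers, so log gathering switches from unit logs to per-container crictl calls using the IDs just discovered. A hand-run equivalent is a sketch like:

    # Once a container exists, pull its recent logs straight from the CRI runtime.
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    if [ -n "$id" ]; then
      sudo crictl logs --tail 400 "$id"
    else
      echo "kube-apiserver container not found"
    fi
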
	I0816 00:37:58.943163   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:58.961551   78713 api_server.go:72] duration metric: took 4m17.689832084s to wait for apiserver process to appear ...
	I0816 00:37:58.961592   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:37:58.961630   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:58.961697   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:59.001773   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.001794   78713 cri.go:89] found id: ""
	I0816 00:37:59.001803   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:59.001876   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.006168   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:59.006222   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:59.041625   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.041647   78713 cri.go:89] found id: ""
	I0816 00:37:59.041654   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:59.041715   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.046258   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:59.046323   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:59.086070   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.086089   78713 cri.go:89] found id: ""
	I0816 00:37:59.086097   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:59.086151   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.090556   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:59.090626   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:59.129889   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.129931   78713 cri.go:89] found id: ""
	I0816 00:37:59.129942   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:59.130008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.135694   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:59.135775   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.375656   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.375979   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:57.805335   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:57.819904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:57.819989   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:57.856119   79191 cri.go:89] found id: ""
	I0816 00:37:57.856146   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.856153   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:57.856160   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:57.856217   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:57.892797   79191 cri.go:89] found id: ""
	I0816 00:37:57.892825   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.892833   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:57.892841   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:57.892905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:57.928753   79191 cri.go:89] found id: ""
	I0816 00:37:57.928784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.928795   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:57.928803   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:57.928884   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:57.963432   79191 cri.go:89] found id: ""
	I0816 00:37:57.963462   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.963474   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:57.963481   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:57.963538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.998759   79191 cri.go:89] found id: ""
	I0816 00:37:57.998784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.998793   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:57.998801   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:57.998886   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:58.035262   79191 cri.go:89] found id: ""
	I0816 00:37:58.035288   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.035296   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:58.035303   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:58.035358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:58.071052   79191 cri.go:89] found id: ""
	I0816 00:37:58.071079   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.071087   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:58.071092   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:58.071150   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:58.110047   79191 cri.go:89] found id: ""
	I0816 00:37:58.110074   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.110083   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:58.110090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:58.110101   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:58.164792   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:58.164823   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:58.178742   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:58.178770   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:58.251861   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:58.251899   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:58.251921   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:58.329805   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:58.329859   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:00.872911   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:00.887914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:00.887986   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:00.925562   79191 cri.go:89] found id: ""
	I0816 00:38:00.925595   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.925606   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:00.925615   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:00.925669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:00.961476   79191 cri.go:89] found id: ""
	I0816 00:38:00.961498   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.961505   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:00.961510   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:00.961554   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:00.997575   79191 cri.go:89] found id: ""
	I0816 00:38:00.997599   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.997608   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:00.997616   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:00.997677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:01.035130   79191 cri.go:89] found id: ""
	I0816 00:38:01.035158   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.035169   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:01.035177   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:01.035232   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:01.073768   79191 cri.go:89] found id: ""
	I0816 00:38:01.073800   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.073811   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:01.073819   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:01.073898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:01.107904   79191 cri.go:89] found id: ""
	I0816 00:38:01.107928   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.107937   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:01.107943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:01.108004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:01.142654   79191 cri.go:89] found id: ""
	I0816 00:38:01.142690   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.142701   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:01.142709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:01.142766   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:01.187565   79191 cri.go:89] found id: ""
	I0816 00:38:01.187599   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.187610   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:01.187621   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:01.187635   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:01.265462   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:01.265493   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:01.265508   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:01.346988   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:01.347020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:01.390977   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:01.391006   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.443858   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:01.443892   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:57.996188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:00.495210   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.176702   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.176728   78713 cri.go:89] found id: ""
	I0816 00:37:59.176738   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:59.176799   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.182305   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:59.182387   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:59.223938   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.223960   78713 cri.go:89] found id: ""
	I0816 00:37:59.223968   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:59.224023   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.228818   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:59.228884   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:59.264566   78713 cri.go:89] found id: ""
	I0816 00:37:59.264589   78713 logs.go:276] 0 containers: []
	W0816 00:37:59.264597   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:59.264606   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:59.264654   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:59.302534   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.302560   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.302565   78713 cri.go:89] found id: ""
	I0816 00:37:59.302574   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:59.302621   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.307021   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.311258   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:59.311299   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:59.425542   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:59.425574   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.466078   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:59.466107   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:59.480894   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:59.480925   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.524790   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:59.524822   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.568832   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:59.568862   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.619399   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:59.619433   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.658616   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:59.658645   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.720421   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:59.720469   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.756558   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:59.756586   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.798650   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:59.798674   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:59.864280   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:59.864323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:59.913086   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:59.913118   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:02.828194   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:38:02.832896   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:38:02.834035   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:02.834059   78713 api_server.go:131] duration metric: took 3.87246001s to wait for apiserver health ...
	I0816 00:38:02.834067   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:02.834089   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:02.834145   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:02.873489   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:02.873512   78713 cri.go:89] found id: ""
	I0816 00:38:02.873521   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:38:02.873577   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.878807   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:02.878883   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:02.919930   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:02.919949   78713 cri.go:89] found id: ""
	I0816 00:38:02.919957   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:38:02.920008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.924459   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:02.924525   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:02.964609   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:02.964636   78713 cri.go:89] found id: ""
	I0816 00:38:02.964644   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:38:02.964697   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.968808   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:02.968921   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:03.017177   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.017201   78713 cri.go:89] found id: ""
	I0816 00:38:03.017210   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:38:03.017275   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.021905   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:03.021992   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:03.061720   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.061741   78713 cri.go:89] found id: ""
	I0816 00:38:03.061748   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:38:03.061801   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.066149   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:03.066206   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:03.107130   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.107149   78713 cri.go:89] found id: ""
	I0816 00:38:03.107156   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:38:03.107213   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.111323   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:03.111372   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:03.149906   78713 cri.go:89] found id: ""
	I0816 00:38:03.149927   78713 logs.go:276] 0 containers: []
	W0816 00:38:03.149934   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:03.149940   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:03.150000   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:03.190981   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.191007   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.191011   78713 cri.go:89] found id: ""
	I0816 00:38:03.191018   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:38:03.191066   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.195733   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.199755   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:03.199775   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:03.302209   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:38:03.302239   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:03.352505   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:38:03.352548   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.392296   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:38:03.392323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.448092   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:38:03.448130   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.487516   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:38:03.487541   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:03.541954   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:03.541989   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:03.557026   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:38:03.557049   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:03.602639   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:38:03.602670   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:03.642706   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:38:03.642733   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.683504   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:38:03.683530   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.721802   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:03.721826   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.089579   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.089621   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.376613   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:03.376837   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:06.679744   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:06.679797   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.679805   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.679812   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.679819   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.679825   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.679849   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.679861   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.679869   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.679878   78713 system_pods.go:74] duration metric: took 3.845804999s to wait for pod list to return data ...
	I0816 00:38:06.679886   78713 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:06.682521   78713 default_sa.go:45] found service account: "default"
	I0816 00:38:06.682553   78713 default_sa.go:55] duration metric: took 2.660224ms for default service account to be created ...
	I0816 00:38:06.682565   78713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:06.688149   78713 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:06.688178   78713 system_pods.go:89] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.688183   78713 system_pods.go:89] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.688187   78713 system_pods.go:89] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.688192   78713 system_pods.go:89] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.688196   78713 system_pods.go:89] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.688199   78713 system_pods.go:89] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.688206   78713 system_pods.go:89] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.688213   78713 system_pods.go:89] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.688220   78713 system_pods.go:126] duration metric: took 5.649758ms to wait for k8s-apps to be running ...
	I0816 00:38:06.688226   78713 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:06.688268   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:06.706263   78713 system_svc.go:56] duration metric: took 18.025675ms WaitForService to wait for kubelet
	I0816 00:38:06.706301   78713 kubeadm.go:582] duration metric: took 4m25.434584326s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:06.706337   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:06.709536   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:06.709553   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:06.709565   78713 node_conditions.go:105] duration metric: took 3.213145ms to run NodePressure ...
	I0816 00:38:06.709576   78713 start.go:241] waiting for startup goroutines ...
	I0816 00:38:06.709582   78713 start.go:246] waiting for cluster config update ...
	I0816 00:38:06.709593   78713 start.go:255] writing updated cluster config ...
	I0816 00:38:06.709864   78713 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:06.755974   78713 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:06.757917   78713 out.go:177] * Done! kubectl is now configured to use "embed-certs-758469" cluster and "default" namespace by default
	I0816 00:38:03.959040   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:03.973674   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:03.973758   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:04.013606   79191 cri.go:89] found id: ""
	I0816 00:38:04.013653   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.013661   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:04.013667   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:04.013737   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:04.054558   79191 cri.go:89] found id: ""
	I0816 00:38:04.054590   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.054602   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:04.054609   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:04.054667   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:04.097116   79191 cri.go:89] found id: ""
	I0816 00:38:04.097143   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.097154   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:04.097162   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:04.097223   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:04.136770   79191 cri.go:89] found id: ""
	I0816 00:38:04.136798   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.136809   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:04.136816   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:04.136865   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:04.171906   79191 cri.go:89] found id: ""
	I0816 00:38:04.171929   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.171937   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:04.171943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:04.172004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:04.208694   79191 cri.go:89] found id: ""
	I0816 00:38:04.208725   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.208735   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:04.208744   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:04.208803   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:04.276713   79191 cri.go:89] found id: ""
	I0816 00:38:04.276744   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.276755   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:04.276763   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:04.276823   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:04.316646   79191 cri.go:89] found id: ""
	I0816 00:38:04.316669   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.316696   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:04.316707   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:04.316722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:04.329819   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:04.329864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:04.399032   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:04.399052   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:04.399080   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.487665   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:04.487698   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:04.530937   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.530962   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:02.496317   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:04.496477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:05.878535   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:08.377096   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:07.087584   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:07.102015   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:07.102086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:07.139530   79191 cri.go:89] found id: ""
	I0816 00:38:07.139559   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.139569   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:07.139577   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:07.139642   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:07.179630   79191 cri.go:89] found id: ""
	I0816 00:38:07.179659   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.179669   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:07.179675   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:07.179734   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:07.216407   79191 cri.go:89] found id: ""
	I0816 00:38:07.216435   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.216444   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:07.216449   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:07.216509   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:07.252511   79191 cri.go:89] found id: ""
	I0816 00:38:07.252536   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.252544   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:07.252551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:07.252613   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:07.288651   79191 cri.go:89] found id: ""
	I0816 00:38:07.288679   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.288689   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:07.288698   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:07.288757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:07.325910   79191 cri.go:89] found id: ""
	I0816 00:38:07.325963   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.325974   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:07.325982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:07.326046   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:07.362202   79191 cri.go:89] found id: ""
	I0816 00:38:07.362230   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.362244   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:07.362251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:07.362316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:07.405272   79191 cri.go:89] found id: ""
	I0816 00:38:07.405302   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.405313   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:07.405324   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:07.405339   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:07.461186   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:07.461222   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:07.475503   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:07.475544   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:07.555146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:07.555165   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:07.555179   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:07.635162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:07.635201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.174600   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:10.190418   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:10.190479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:10.251925   79191 cri.go:89] found id: ""
	I0816 00:38:10.251960   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.251969   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:10.251974   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:10.252027   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:10.289038   79191 cri.go:89] found id: ""
	I0816 00:38:10.289078   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.289088   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:10.289096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:10.289153   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:10.334562   79191 cri.go:89] found id: ""
	I0816 00:38:10.334591   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.334601   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:10.334609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:10.334669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:10.371971   79191 cri.go:89] found id: ""
	I0816 00:38:10.372000   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.372010   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:10.372018   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:10.372084   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:10.409654   79191 cri.go:89] found id: ""
	I0816 00:38:10.409685   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.409696   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:10.409703   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:10.409770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:10.446639   79191 cri.go:89] found id: ""
	I0816 00:38:10.446666   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.446675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:10.446683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:10.446750   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:10.483601   79191 cri.go:89] found id: ""
	I0816 00:38:10.483629   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.483641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:10.483648   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:10.483707   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:10.519640   79191 cri.go:89] found id: ""
	I0816 00:38:10.519670   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.519679   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:10.519690   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:10.519704   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:10.603281   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:10.603300   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:10.603311   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:10.689162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:10.689198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.730701   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:10.730724   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:10.780411   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:10.780441   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:06.997726   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:09.495539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.495753   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:10.876242   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.376332   78747 pod_ready.go:82] duration metric: took 4m0.006460655s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:38:11.376362   78747 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:38:11.376372   78747 pod_ready.go:39] duration metric: took 4m3.906659924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:38:11.376389   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:38:11.376416   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:11.376472   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:11.425716   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:11.425741   78747 cri.go:89] found id: ""
	I0816 00:38:11.425749   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:11.425804   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.431122   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:11.431195   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:11.468622   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:11.468647   78747 cri.go:89] found id: ""
	I0816 00:38:11.468657   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:11.468713   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.474270   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:11.474329   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:11.518448   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:11.518493   78747 cri.go:89] found id: ""
	I0816 00:38:11.518502   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:11.518569   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.524185   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:11.524242   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:11.561343   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:11.561367   78747 cri.go:89] found id: ""
	I0816 00:38:11.561374   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:11.561418   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.565918   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:11.565992   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:11.606010   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.606036   78747 cri.go:89] found id: ""
	I0816 00:38:11.606043   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:11.606097   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.610096   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:11.610166   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:11.646204   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:11.646229   78747 cri.go:89] found id: ""
	I0816 00:38:11.646238   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:11.646295   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.650405   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:11.650467   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:11.690407   78747 cri.go:89] found id: ""
	I0816 00:38:11.690436   78747 logs.go:276] 0 containers: []
	W0816 00:38:11.690446   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:11.690454   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:11.690510   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:11.736695   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:11.736722   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:11.736729   78747 cri.go:89] found id: ""
	I0816 00:38:11.736738   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:11.736803   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.741022   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.744983   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:11.745011   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.791452   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:11.791484   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:12.304425   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:12.304470   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:12.341318   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:12.341353   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:12.401425   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:12.401464   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:12.476598   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:12.476653   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:12.495594   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:12.495629   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:12.645961   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:12.645991   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:12.697058   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:12.697091   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:12.749085   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:12.749117   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:12.795786   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:12.795831   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:12.835928   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:12.835959   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:12.872495   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:12.872524   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:13.294689   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:13.308762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:13.308822   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:13.345973   79191 cri.go:89] found id: ""
	I0816 00:38:13.346004   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.346015   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:13.346022   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:13.346083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:13.382905   79191 cri.go:89] found id: ""
	I0816 00:38:13.382934   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.382945   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:13.382952   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:13.383001   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:13.417616   79191 cri.go:89] found id: ""
	I0816 00:38:13.417650   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.417662   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:13.417669   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:13.417739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:13.453314   79191 cri.go:89] found id: ""
	I0816 00:38:13.453350   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.453360   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:13.453368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:13.453435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:13.488507   79191 cri.go:89] found id: ""
	I0816 00:38:13.488536   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.488547   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:13.488555   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:13.488614   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:13.527064   79191 cri.go:89] found id: ""
	I0816 00:38:13.527095   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.527108   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:13.527116   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:13.527178   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:13.562838   79191 cri.go:89] found id: ""
	I0816 00:38:13.562867   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.562876   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:13.562882   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:13.562944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:13.598924   79191 cri.go:89] found id: ""
	I0816 00:38:13.598963   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.598974   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:13.598985   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:13.598999   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:13.651122   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:13.651156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:13.665255   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:13.665281   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:13.742117   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:13.742135   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:13.742148   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:13.824685   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:13.824719   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.366542   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:16.380855   79191 kubeadm.go:597] duration metric: took 4m3.665876253s to restartPrimaryControlPlane
	W0816 00:38:16.380919   79191 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:38:16.380946   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:38:13.496702   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.996304   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.421355   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:15.437651   78747 api_server.go:72] duration metric: took 4m15.224557183s to wait for apiserver process to appear ...
	I0816 00:38:15.437677   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:38:15.437721   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:15.437782   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:15.473240   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:15.473265   78747 cri.go:89] found id: ""
	I0816 00:38:15.473273   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:15.473335   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.477666   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:15.477734   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:15.526073   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:15.526095   78747 cri.go:89] found id: ""
	I0816 00:38:15.526104   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:15.526165   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.530706   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:15.530775   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:15.571124   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:15.571149   78747 cri.go:89] found id: ""
	I0816 00:38:15.571159   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:15.571217   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.578613   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:15.578690   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:15.617432   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:15.617454   78747 cri.go:89] found id: ""
	I0816 00:38:15.617464   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:15.617529   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.621818   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:15.621899   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:15.658963   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:15.658981   78747 cri.go:89] found id: ""
	I0816 00:38:15.658988   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:15.659037   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.663170   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:15.663230   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:15.699297   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.699322   78747 cri.go:89] found id: ""
	I0816 00:38:15.699331   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:15.699388   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.704029   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:15.704085   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:15.742790   78747 cri.go:89] found id: ""
	I0816 00:38:15.742816   78747 logs.go:276] 0 containers: []
	W0816 00:38:15.742825   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:15.742830   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:15.742875   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:15.776898   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:15.776918   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:15.776922   78747 cri.go:89] found id: ""
	I0816 00:38:15.776945   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:15.777007   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.781511   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.785953   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:15.785981   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.840461   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:15.840498   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:16.320285   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:16.320323   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.362171   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:16.362200   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:16.444803   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:16.444834   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:16.461705   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:16.461732   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:16.576190   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:16.576220   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:16.626407   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:16.626449   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:16.673004   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:16.673036   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:16.724770   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:16.724797   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:16.764812   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:16.764838   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:16.804268   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:16.804300   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:16.841197   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:16.841221   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.380352   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:38:19.386760   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:38:19.387751   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:19.387773   78747 api_server.go:131] duration metric: took 3.950088801s to wait for apiserver health ...
	I0816 00:38:19.387781   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:19.387801   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:19.387843   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:19.429928   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:19.429952   78747 cri.go:89] found id: ""
	I0816 00:38:19.429961   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:19.430021   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.434822   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:19.434870   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:19.476789   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:19.476811   78747 cri.go:89] found id: ""
	I0816 00:38:19.476819   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:19.476869   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.481574   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:19.481640   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:19.528718   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:19.528742   78747 cri.go:89] found id: ""
	I0816 00:38:19.528750   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:19.528799   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.533391   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:19.533455   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:19.581356   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:19.581374   78747 cri.go:89] found id: ""
	I0816 00:38:19.581381   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:19.581427   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.585915   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:19.585977   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:19.623514   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:19.623544   78747 cri.go:89] found id: ""
	I0816 00:38:19.623552   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:19.623606   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.627652   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:19.627711   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:19.663933   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:19.663957   78747 cri.go:89] found id: ""
	I0816 00:38:19.663967   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:19.664032   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.668093   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:19.668162   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:19.707688   78747 cri.go:89] found id: ""
	I0816 00:38:19.707716   78747 logs.go:276] 0 containers: []
	W0816 00:38:19.707726   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:19.707741   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:19.707804   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:19.745900   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:19.745930   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.745935   78747 cri.go:89] found id: ""
	I0816 00:38:19.745944   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:19.746002   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.750934   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.755022   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:19.755044   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:19.807228   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:19.807257   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:19.918242   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:19.918274   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:21.772367   79191 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.39139467s)
	I0816 00:38:21.772449   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:18.495150   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:20.995073   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:19.969165   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:19.969198   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:20.008945   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:20.008975   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:20.050080   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:20.050120   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:20.450059   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:20.450107   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:20.490694   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:20.490721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:20.532856   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:20.532890   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:20.609130   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:20.609178   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:20.624248   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:20.624279   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:20.675636   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:20.675669   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:20.716694   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:20.716721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:23.289748   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:23.289773   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.289778   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.289782   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.289786   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.289789   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.289792   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.289799   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.289814   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.289827   78747 system_pods.go:74] duration metric: took 3.902040304s to wait for pod list to return data ...
	I0816 00:38:23.289836   78747 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:23.293498   78747 default_sa.go:45] found service account: "default"
	I0816 00:38:23.293528   78747 default_sa.go:55] duration metric: took 3.671585ms for default service account to be created ...
	I0816 00:38:23.293539   78747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:23.298509   78747 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:23.298534   78747 system_pods.go:89] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.298540   78747 system_pods.go:89] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.298545   78747 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.298549   78747 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.298552   78747 system_pods.go:89] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.298556   78747 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.298561   78747 system_pods.go:89] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.298567   78747 system_pods.go:89] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.298576   78747 system_pods.go:126] duration metric: took 5.030455ms to wait for k8s-apps to be running ...
	I0816 00:38:23.298585   78747 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:23.298632   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:23.318383   78747 system_svc.go:56] duration metric: took 19.787836ms WaitForService to wait for kubelet
	I0816 00:38:23.318419   78747 kubeadm.go:582] duration metric: took 4m23.105331758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:23.318446   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:23.322398   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:23.322425   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:23.322436   78747 node_conditions.go:105] duration metric: took 3.985107ms to run NodePressure ...
	I0816 00:38:23.322447   78747 start.go:241] waiting for startup goroutines ...
	I0816 00:38:23.322454   78747 start.go:246] waiting for cluster config update ...
	I0816 00:38:23.322464   78747 start.go:255] writing updated cluster config ...
	I0816 00:38:23.322801   78747 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:23.374057   78747 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:23.376186   78747 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-616827" cluster and "default" namespace by default
	I0816 00:38:21.788969   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:38:21.800050   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:38:21.811193   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:38:21.811216   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:38:21.811260   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:38:21.821328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:38:21.821391   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:38:21.831777   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:38:21.841357   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:38:21.841424   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:38:21.851564   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.861262   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:38:21.861322   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.871929   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:38:21.881544   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:38:21.881595   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:38:21.891725   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:38:22.120640   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:38:22.997351   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:25.494851   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:27.494976   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:29.495248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:31.994586   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:33.995565   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:36.494547   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:38.495194   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:40.995653   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:42.996593   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:45.495409   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:47.496072   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:49.997645   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:52.496097   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:54.994390   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:56.995869   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:58.996230   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:01.495217   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:02.989403   78489 pod_ready.go:82] duration metric: took 4m0.001106911s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	E0816 00:39:02.989435   78489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 00:39:02.989456   78489 pod_ready.go:39] duration metric: took 4m14.547419665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:02.989488   78489 kubeadm.go:597] duration metric: took 4m21.799297957s to restartPrimaryControlPlane
	W0816 00:39:02.989550   78489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:39:02.989582   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:39:29.166109   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.176504479s)
	I0816 00:39:29.166193   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:29.188082   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:39:29.207577   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:39:29.230485   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:39:29.230510   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:39:29.230564   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:39:29.242106   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:39:29.242177   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:39:29.258756   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:39:29.272824   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:39:29.272896   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:39:29.285574   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.294909   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:39:29.294985   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.304843   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:39:29.315125   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:39:29.315173   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:39:29.325422   78489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:39:29.375775   78489 kubeadm.go:310] W0816 00:39:29.358885    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.376658   78489 kubeadm.go:310] W0816 00:39:29.359753    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.504337   78489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:39:38.219769   78489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 00:39:38.219865   78489 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:39:38.219968   78489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:39:38.220094   78489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:39:38.220215   78489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 00:39:38.220302   78489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:39:38.221971   78489 out.go:235]   - Generating certificates and keys ...
	I0816 00:39:38.222037   78489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:39:38.222119   78489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:39:38.222234   78489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:39:38.222316   78489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:39:38.222430   78489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:39:38.222509   78489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:39:38.222584   78489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:39:38.222684   78489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:39:38.222767   78489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:39:38.222831   78489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:39:38.222862   78489 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:39:38.222943   78489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:39:38.223035   78489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:39:38.223121   78489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 00:39:38.223212   78489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:39:38.223299   78489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:39:38.223355   78489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:39:38.223452   78489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:39:38.223534   78489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:39:38.225012   78489 out.go:235]   - Booting up control plane ...
	I0816 00:39:38.225086   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:39:38.225153   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:39:38.225211   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:39:38.225296   78489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:39:38.225366   78489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:39:38.225399   78489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:39:38.225542   78489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 00:39:38.225706   78489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 00:39:38.225803   78489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001324649s
	I0816 00:39:38.225917   78489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 00:39:38.226004   78489 kubeadm.go:310] [api-check] The API server is healthy after 5.001672205s
	I0816 00:39:38.226125   78489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 00:39:38.226267   78489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 00:39:38.226352   78489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 00:39:38.226537   78489 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-819398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 00:39:38.226620   78489 kubeadm.go:310] [bootstrap-token] Using token: 4qqrpj.xeaneqftblh8gcp3
	I0816 00:39:38.227962   78489 out.go:235]   - Configuring RBAC rules ...
	I0816 00:39:38.228060   78489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 00:39:38.228140   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 00:39:38.228290   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 00:39:38.228437   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 00:39:38.228558   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 00:39:38.228697   78489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 00:39:38.228877   78489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 00:39:38.228942   78489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 00:39:38.229000   78489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 00:39:38.229010   78489 kubeadm.go:310] 
	I0816 00:39:38.229086   78489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 00:39:38.229096   78489 kubeadm.go:310] 
	I0816 00:39:38.229160   78489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 00:39:38.229166   78489 kubeadm.go:310] 
	I0816 00:39:38.229186   78489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 00:39:38.229252   78489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 00:39:38.229306   78489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 00:39:38.229312   78489 kubeadm.go:310] 
	I0816 00:39:38.229361   78489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 00:39:38.229367   78489 kubeadm.go:310] 
	I0816 00:39:38.229403   78489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 00:39:38.229408   78489 kubeadm.go:310] 
	I0816 00:39:38.229447   78489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 00:39:38.229504   78489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 00:39:38.229562   78489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 00:39:38.229567   78489 kubeadm.go:310] 
	I0816 00:39:38.229636   78489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 00:39:38.229701   78489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 00:39:38.229707   78489 kubeadm.go:310] 
	I0816 00:39:38.229793   78489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.229925   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0816 00:39:38.229954   78489 kubeadm.go:310] 	--control-plane 
	I0816 00:39:38.229960   78489 kubeadm.go:310] 
	I0816 00:39:38.230029   78489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 00:39:38.230038   78489 kubeadm.go:310] 
	I0816 00:39:38.230109   78489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.230211   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
	I0816 00:39:38.230223   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:39:38.230232   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:39:38.231742   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:39:38.233079   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:39:38.245435   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:39:38.269502   78489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:39:38.269566   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.269593   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-819398 minikube.k8s.io/updated_at=2024_08_16T00_39_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=no-preload-819398 minikube.k8s.io/primary=true
	I0816 00:39:38.304272   78489 ops.go:34] apiserver oom_adj: -16
	I0816 00:39:38.485643   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.986569   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.486177   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.985737   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.486311   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.985981   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.486071   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.986414   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.486292   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.603092   78489 kubeadm.go:1113] duration metric: took 4.333590575s to wait for elevateKubeSystemPrivileges
	I0816 00:39:42.603133   78489 kubeadm.go:394] duration metric: took 5m1.4690157s to StartCluster
	I0816 00:39:42.603158   78489 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.603258   78489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:39:42.604833   78489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.605072   78489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:39:42.605133   78489 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:39:42.605219   78489 addons.go:69] Setting storage-provisioner=true in profile "no-preload-819398"
	I0816 00:39:42.605254   78489 addons.go:234] Setting addon storage-provisioner=true in "no-preload-819398"
	I0816 00:39:42.605251   78489 addons.go:69] Setting default-storageclass=true in profile "no-preload-819398"
	I0816 00:39:42.605259   78489 addons.go:69] Setting metrics-server=true in profile "no-preload-819398"
	I0816 00:39:42.605295   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:39:42.605308   78489 addons.go:234] Setting addon metrics-server=true in "no-preload-819398"
	I0816 00:39:42.605309   78489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-819398"
	W0816 00:39:42.605320   78489 addons.go:243] addon metrics-server should already be in state true
	W0816 00:39:42.605266   78489 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:39:42.605355   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605370   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605697   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605717   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605731   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605735   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605777   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605837   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.606458   78489 out.go:177] * Verifying Kubernetes components...
	I0816 00:39:42.607740   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:39:42.622512   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0816 00:39:42.623130   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.623697   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.623720   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.624070   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.624666   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.624695   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.626221   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0816 00:39:42.626220   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0816 00:39:42.626608   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.626695   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.627158   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627179   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627329   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627346   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627490   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.627696   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.628049   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.628165   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.628189   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.632500   78489 addons.go:234] Setting addon default-storageclass=true in "no-preload-819398"
	W0816 00:39:42.632523   78489 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:39:42.632554   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.632897   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.632928   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.644779   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0816 00:39:42.645422   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.645995   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.646026   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.646395   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.646607   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.646960   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0816 00:39:42.647374   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.648126   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.648141   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.648471   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.649494   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.649732   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.651509   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.651600   78489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:39:42.652823   78489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:39:42.652936   78489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:42.652951   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:39:42.652970   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654197   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:39:42.654217   78489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:39:42.654234   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654380   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I0816 00:39:42.654812   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.655316   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.655332   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.655784   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.656330   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.656356   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.659148   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659319   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659629   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659648   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659776   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659794   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659959   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660138   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660164   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660330   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660444   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660478   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660587   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.660583   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.674431   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
	I0816 00:39:42.674827   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.675399   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.675420   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.675756   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.675993   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.677956   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.678195   78489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:42.678211   78489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:39:42.678230   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.681163   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681593   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.681615   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681916   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.682099   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.682197   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.682276   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.822056   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:39:42.840356   78489 node_ready.go:35] waiting up to 6m0s for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852864   78489 node_ready.go:49] node "no-preload-819398" has status "Ready":"True"
	I0816 00:39:42.852887   78489 node_ready.go:38] duration metric: took 12.497677ms for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852899   78489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:42.866637   78489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:42.908814   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:39:42.908832   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:39:42.949047   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:39:42.949070   78489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:39:42.959159   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:43.021536   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.021557   78489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:39:43.068214   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:43.082144   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.243834   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.243857   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244177   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244192   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.244201   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.244212   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244451   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244505   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.250358   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.250376   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.250608   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:43.250648   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.250656   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419115   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.350866587s)
	I0816 00:39:44.419166   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419175   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419519   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419545   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419542   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419561   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419573   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419824   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419836   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419851   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.436623   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.354435707s)
	I0816 00:39:44.436682   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.436697   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437131   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437150   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437160   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.437169   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437207   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.437495   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437517   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437528   78489 addons.go:475] Verifying addon metrics-server=true in "no-preload-819398"
	I0816 00:39:44.439622   78489 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 00:39:44.441097   78489 addons.go:510] duration metric: took 1.835961958s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0816 00:39:44.878479   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:47.373009   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:49.380832   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:50.372883   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.372919   78489 pod_ready.go:82] duration metric: took 7.506242182s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.372933   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378463   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.378486   78489 pod_ready.go:82] duration metric: took 5.546402ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378496   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383347   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.383364   78489 pod_ready.go:82] duration metric: took 4.862995ms for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383374   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387672   78489 pod_ready.go:93] pod "kube-proxy-nl7g6" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.387693   78489 pod_ready.go:82] duration metric: took 4.312811ms for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387703   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391921   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.391939   78489 pod_ready.go:82] duration metric: took 4.229092ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391945   78489 pod_ready.go:39] duration metric: took 7.539034647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:50.391958   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:39:50.392005   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:39:50.407980   78489 api_server.go:72] duration metric: took 7.802877941s to wait for apiserver process to appear ...
	I0816 00:39:50.408017   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:39:50.408039   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:39:50.412234   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:39:50.413278   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:39:50.413297   78489 api_server.go:131] duration metric: took 5.273051ms to wait for apiserver health ...
	I0816 00:39:50.413304   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:39:50.573185   78489 system_pods.go:59] 9 kube-system pods found
	I0816 00:39:50.573226   78489 system_pods.go:61] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.573233   78489 system_pods.go:61] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.573239   78489 system_pods.go:61] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.573244   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.573250   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.573257   78489 system_pods.go:61] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.573262   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.573271   78489 system_pods.go:61] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.573278   78489 system_pods.go:61] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.573288   78489 system_pods.go:74] duration metric: took 159.97729ms to wait for pod list to return data ...
	I0816 00:39:50.573301   78489 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:39:50.771164   78489 default_sa.go:45] found service account: "default"
	I0816 00:39:50.771189   78489 default_sa.go:55] duration metric: took 197.881739ms for default service account to be created ...
	I0816 00:39:50.771198   78489 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:39:50.973415   78489 system_pods.go:86] 9 kube-system pods found
	I0816 00:39:50.973448   78489 system_pods.go:89] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.973453   78489 system_pods.go:89] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.973457   78489 system_pods.go:89] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.973461   78489 system_pods.go:89] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.973465   78489 system_pods.go:89] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.973468   78489 system_pods.go:89] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.973471   78489 system_pods.go:89] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.973477   78489 system_pods.go:89] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.973482   78489 system_pods.go:89] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.973491   78489 system_pods.go:126] duration metric: took 202.288008ms to wait for k8s-apps to be running ...
	I0816 00:39:50.973498   78489 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:39:50.973539   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:50.989562   78489 system_svc.go:56] duration metric: took 16.053781ms WaitForService to wait for kubelet
	I0816 00:39:50.989595   78489 kubeadm.go:582] duration metric: took 8.384495377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:39:50.989618   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:39:51.171076   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:39:51.171109   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:39:51.171120   78489 node_conditions.go:105] duration metric: took 181.496732ms to run NodePressure ...
	I0816 00:39:51.171134   78489 start.go:241] waiting for startup goroutines ...
	I0816 00:39:51.171144   78489 start.go:246] waiting for cluster config update ...
	I0816 00:39:51.171157   78489 start.go:255] writing updated cluster config ...
	I0816 00:39:51.171465   78489 ssh_runner.go:195] Run: rm -f paused
	I0816 00:39:51.220535   78489 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:39:51.223233   78489 out.go:177] * Done! kubectl is now configured to use "no-preload-819398" cluster and "default" namespace by default
	I0816 00:40:18.143220   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:40:18.143333   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:40:18.144757   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.144804   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.144888   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.145018   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.145134   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:18.145210   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:18.146791   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:18.146879   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:18.146965   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:18.147072   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:18.147164   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:18.147258   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:18.147340   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:18.147434   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:18.147525   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:18.147613   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:18.147708   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:18.147744   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:18.147791   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:18.147839   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:18.147916   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:18.147989   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:18.148045   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:18.148194   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:18.148318   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:18.148365   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:18.148458   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:18.149817   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:18.149941   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:18.150044   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:18.150107   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:18.150187   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:18.150323   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:40:18.150380   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:40:18.150460   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150671   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.150766   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150953   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151033   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151232   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151305   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151520   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151614   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151840   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151856   79191 kubeadm.go:310] 
	I0816 00:40:18.151917   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:40:18.151978   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:40:18.151992   79191 kubeadm.go:310] 
	I0816 00:40:18.152046   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:40:18.152097   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:40:18.152204   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:40:18.152218   79191 kubeadm.go:310] 
	I0816 00:40:18.152314   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:40:18.152349   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:40:18.152377   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:40:18.152384   79191 kubeadm.go:310] 
	I0816 00:40:18.152466   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:40:18.152537   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:40:18.152543   79191 kubeadm.go:310] 
	I0816 00:40:18.152674   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:40:18.152769   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:40:18.152853   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:40:18.152914   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:40:18.152978   79191 kubeadm.go:310] 
	W0816 00:40:18.153019   79191 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 00:40:18.153055   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:40:18.634058   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:40:18.648776   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:40:18.659504   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:40:18.659529   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:40:18.659584   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:40:18.670234   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:40:18.670285   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:40:18.680370   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:40:18.689496   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:40:18.689557   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:40:18.698949   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.708056   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:40:18.708118   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.718261   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:40:18.728708   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:40:18.728777   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:40:18.739253   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:40:18.819666   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.819746   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.966568   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.966704   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.966868   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:19.168323   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:19.170213   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:19.170335   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:19.170464   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:19.170546   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:19.170598   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:19.170670   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:19.170740   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:19.170828   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:19.170924   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:19.171031   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:19.171129   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:19.171179   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:19.171261   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:19.421256   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:19.585260   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:19.672935   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:19.928620   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:19.952420   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:19.953527   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:19.953578   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:20.090384   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:20.092904   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:20.093037   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:20.105743   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:20.106980   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:20.108199   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:20.111014   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:41:00.113053   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:41:00.113479   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:00.113752   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:05.113795   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:05.114091   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:15.114695   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:15.114932   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:35.116019   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:35.116207   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.116728   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:42:15.116994   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.117018   79191 kubeadm.go:310] 
	I0816 00:42:15.117071   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:42:15.117136   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:42:15.117147   79191 kubeadm.go:310] 
	I0816 00:42:15.117198   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:42:15.117248   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:42:15.117402   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:42:15.117412   79191 kubeadm.go:310] 
	I0816 00:42:15.117543   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:42:15.117601   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:42:15.117636   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:42:15.117644   79191 kubeadm.go:310] 
	I0816 00:42:15.117778   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:42:15.117918   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:42:15.117929   79191 kubeadm.go:310] 
	I0816 00:42:15.118083   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:42:15.118215   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:42:15.118313   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:42:15.118412   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:42:15.118433   79191 kubeadm.go:310] 
	I0816 00:42:15.118582   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:42:15.118698   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:42:15.118843   79191 kubeadm.go:394] duration metric: took 8m2.460648867s to StartCluster
	I0816 00:42:15.118855   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:42:15.118891   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:42:15.118957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:42:15.162809   79191 cri.go:89] found id: ""
	I0816 00:42:15.162837   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.162848   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:42:15.162855   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:42:15.162925   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:42:15.198020   79191 cri.go:89] found id: ""
	I0816 00:42:15.198042   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.198053   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:42:15.198063   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:42:15.198132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:42:15.238168   79191 cri.go:89] found id: ""
	I0816 00:42:15.238197   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.238206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:42:15.238213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:42:15.238273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:42:15.278364   79191 cri.go:89] found id: ""
	I0816 00:42:15.278391   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.278401   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:42:15.278407   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:42:15.278465   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:42:15.316182   79191 cri.go:89] found id: ""
	I0816 00:42:15.316209   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.316216   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:42:15.316222   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:42:15.316278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:42:15.352934   79191 cri.go:89] found id: ""
	I0816 00:42:15.352962   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.352970   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:42:15.352976   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:42:15.353031   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:42:15.388940   79191 cri.go:89] found id: ""
	I0816 00:42:15.388966   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.388973   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:42:15.388983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:42:15.389042   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:42:15.424006   79191 cri.go:89] found id: ""
	I0816 00:42:15.424035   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.424043   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:42:15.424054   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:42:15.424073   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:42:15.504823   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:42:15.504846   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:42:15.504858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:42:15.608927   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:42:15.608959   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:42:15.676785   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:42:15.676810   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:42:15.744763   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:42:15.744805   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0816 00:42:15.760944   79191 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 00:42:15.761012   79191 out.go:270] * 
	W0816 00:42:15.761078   79191 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.761098   79191 out.go:270] * 
	W0816 00:42:15.762220   79191 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:42:15.765697   79191 out.go:201] 
	W0816 00:42:15.766942   79191 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.767018   79191 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 00:42:15.767040   79191 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 00:42:15.768526   79191 out.go:201] 
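	For reference, the suggestion printed above corresponds to re-running the start command with the kubelet cgroup driver pinned to systemd. A minimal sketch, assuming a KVM/CRI-O profile like the ones exercised in this report (the profile name below is a placeholder, not taken from this log):
	
	    # hypothetical re-run using the flag the log itself suggests (out.go:270)
	    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.20.0 \
	      --extra-config=kubelet.cgroup-driver=systemd
	
	If the kubelet still fails its health check after that, 'journalctl -xeu kubelet' on the node is the next place to look, as advised above.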
	
	
	==> CRI-O <==
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.535371976Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769245535343032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da58b1f6-335e-45a2-bcd2-d649b4fdaea8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.536036387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08bd5590-3b84-4692-b27c-447ed3896b88 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.536150770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08bd5590-3b84-4692-b27c-447ed3896b88 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.536363465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768468688541660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f014e6cda883e1be849366a2984c3f9f80db9a87d96485de121db9c754b4dac7,PodSandboxId:69b55dbd9e253a720509e9a771d0c2fcc2f04a040953538851d503ffd85121e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768446956220991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44031c7f-e317-4703-aab3-50572aae00c2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c,PodSandboxId:6a331d270c6f2e515692365fdf220ed7c2bd679ea0a7e9235f6a77988827201c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768445697494669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4n9qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611de0e-5480-4841-bfb5-68050fa068aa,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8,PodSandboxId:306632430a90e1623825395b3f2e25a8ada85715621156079531fcd81637da13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768437945514172,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8f9913-5
496-4fda-800e-c942e714f13e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768437889693500,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-
c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87,PodSandboxId:7c2e6768a141badbb09ec9f4e6a4923bf3120cd0def717d5b018008ffa5d64ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768433147183991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656000523d0c38f28776f138cadf7775,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60,PodSandboxId:92ed2606bf7babe56c413aaa4a3ebaca03052e6f0c12c046cbff2d1a11814de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768433119357113,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f07b65b5b4891ed9946624fdc67020,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46,PodSandboxId:81667bd6b6c80b2d134d3735979e4059be1a5c6b0671b2cb1665a5dc21af860c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768433093688159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1376704204a85444fb745b41bd56a466,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86,PodSandboxId:c52e756e9d40c87a3a35388b00547a911f122aa5a17fd6456f28ecc6c19441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768433126966873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e61dbeec6c5826180b0c3cc193efb
0,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08bd5590-3b84-4692-b27c-447ed3896b88 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.581512356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca14e5e6-d2fd-4eec-b454-a351181ac92a name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.581605958Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca14e5e6-d2fd-4eec-b454-a351181ac92a name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.582480346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03ef4f5e-1f23-477a-b67e-c30ee90427aa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.582881990Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769245582857865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03ef4f5e-1f23-477a-b67e-c30ee90427aa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.583501626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71c2eebe-2d73-42d6-92f1-c77672445a73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.583580298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71c2eebe-2d73-42d6-92f1-c77672445a73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.583912183Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768468688541660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f014e6cda883e1be849366a2984c3f9f80db9a87d96485de121db9c754b4dac7,PodSandboxId:69b55dbd9e253a720509e9a771d0c2fcc2f04a040953538851d503ffd85121e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768446956220991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44031c7f-e317-4703-aab3-50572aae00c2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c,PodSandboxId:6a331d270c6f2e515692365fdf220ed7c2bd679ea0a7e9235f6a77988827201c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768445697494669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4n9qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611de0e-5480-4841-bfb5-68050fa068aa,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8,PodSandboxId:306632430a90e1623825395b3f2e25a8ada85715621156079531fcd81637da13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768437945514172,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8f9913-5
496-4fda-800e-c942e714f13e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768437889693500,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-
c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87,PodSandboxId:7c2e6768a141badbb09ec9f4e6a4923bf3120cd0def717d5b018008ffa5d64ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768433147183991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656000523d0c38f28776f138cadf7775,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60,PodSandboxId:92ed2606bf7babe56c413aaa4a3ebaca03052e6f0c12c046cbff2d1a11814de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768433119357113,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f07b65b5b4891ed9946624fdc67020,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46,PodSandboxId:81667bd6b6c80b2d134d3735979e4059be1a5c6b0671b2cb1665a5dc21af860c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768433093688159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1376704204a85444fb745b41bd56a466,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86,PodSandboxId:c52e756e9d40c87a3a35388b00547a911f122aa5a17fd6456f28ecc6c19441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768433126966873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e61dbeec6c5826180b0c3cc193efb
0,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71c2eebe-2d73-42d6-92f1-c77672445a73 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.623562690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4131ef14-bd1e-4ae1-9183-f8112e956035 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.623653164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4131ef14-bd1e-4ae1-9183-f8112e956035 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.624716490Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=197b1dbc-e103-485d-a76e-681af2aab804 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.625184783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769245625156293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=197b1dbc-e103-485d-a76e-681af2aab804 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.625704517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f075ffd6-0e5a-4cc7-bcc4-e45fe742d74e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.625765180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f075ffd6-0e5a-4cc7-bcc4-e45fe742d74e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.625985084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768468688541660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f014e6cda883e1be849366a2984c3f9f80db9a87d96485de121db9c754b4dac7,PodSandboxId:69b55dbd9e253a720509e9a771d0c2fcc2f04a040953538851d503ffd85121e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768446956220991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44031c7f-e317-4703-aab3-50572aae00c2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c,PodSandboxId:6a331d270c6f2e515692365fdf220ed7c2bd679ea0a7e9235f6a77988827201c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768445697494669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4n9qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611de0e-5480-4841-bfb5-68050fa068aa,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8,PodSandboxId:306632430a90e1623825395b3f2e25a8ada85715621156079531fcd81637da13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768437945514172,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8f9913-5
496-4fda-800e-c942e714f13e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768437889693500,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-
c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87,PodSandboxId:7c2e6768a141badbb09ec9f4e6a4923bf3120cd0def717d5b018008ffa5d64ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768433147183991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656000523d0c38f28776f138cadf7775,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60,PodSandboxId:92ed2606bf7babe56c413aaa4a3ebaca03052e6f0c12c046cbff2d1a11814de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768433119357113,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f07b65b5b4891ed9946624fdc67020,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46,PodSandboxId:81667bd6b6c80b2d134d3735979e4059be1a5c6b0671b2cb1665a5dc21af860c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768433093688159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1376704204a85444fb745b41bd56a466,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86,PodSandboxId:c52e756e9d40c87a3a35388b00547a911f122aa5a17fd6456f28ecc6c19441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768433126966873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e61dbeec6c5826180b0c3cc193efb
0,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f075ffd6-0e5a-4cc7-bcc4-e45fe742d74e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.660834244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=776b2145-6fbe-4f21-9375-b94946563e45 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.660921147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=776b2145-6fbe-4f21-9375-b94946563e45 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.662610054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d72bd2c-3fda-48cd-97c4-a21da0c3997b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.662995365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769245662971596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d72bd2c-3fda-48cd-97c4-a21da0c3997b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.663826294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5cb0fbd-cddc-4297-b9f1-4566796cf187 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.663895406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5cb0fbd-cddc-4297-b9f1-4566796cf187 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:47:25 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:47:25.664153801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768468688541660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f014e6cda883e1be849366a2984c3f9f80db9a87d96485de121db9c754b4dac7,PodSandboxId:69b55dbd9e253a720509e9a771d0c2fcc2f04a040953538851d503ffd85121e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768446956220991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44031c7f-e317-4703-aab3-50572aae00c2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c,PodSandboxId:6a331d270c6f2e515692365fdf220ed7c2bd679ea0a7e9235f6a77988827201c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768445697494669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4n9qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611de0e-5480-4841-bfb5-68050fa068aa,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8,PodSandboxId:306632430a90e1623825395b3f2e25a8ada85715621156079531fcd81637da13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768437945514172,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8f9913-5
496-4fda-800e-c942e714f13e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768437889693500,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-
c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87,PodSandboxId:7c2e6768a141badbb09ec9f4e6a4923bf3120cd0def717d5b018008ffa5d64ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768433147183991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656000523d0c38f28776f138cadf7775,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60,PodSandboxId:92ed2606bf7babe56c413aaa4a3ebaca03052e6f0c12c046cbff2d1a11814de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768433119357113,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f07b65b5b4891ed9946624fdc67020,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46,PodSandboxId:81667bd6b6c80b2d134d3735979e4059be1a5c6b0671b2cb1665a5dc21af860c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768433093688159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1376704204a85444fb745b41bd56a466,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86,PodSandboxId:c52e756e9d40c87a3a35388b00547a911f122aa5a17fd6456f28ecc6c19441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768433126966873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e61dbeec6c5826180b0c3cc193efb
0,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5cb0fbd-cddc-4297-b9f1-4566796cf187 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	31400c13619c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   8bcfb56712159       storage-provisioner
	f014e6cda883e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   69b55dbd9e253       busybox
	15fd3e395581c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   6a331d270c6f2       coredns-6f6b679f8f-4n9qq
	9821dfda7cc43       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   306632430a90e       kube-proxy-f99ds
	d624b2f88ce3e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   8bcfb56712159       storage-provisioner
	d6e8ce8b4b577       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   7c2e6768a141b       etcd-default-k8s-diff-port-616827
	84380e27c5a9d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   c52e756e9d40c       kube-controller-manager-default-k8s-diff-port-616827
	eb4c36b11d03e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   92ed2606bf7ba       kube-scheduler-default-k8s-diff-port-616827
	169a7e51493aa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   81667bd6b6c80       kube-apiserver-default-k8s-diff-port-616827
	
	
	==> coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47115 - 59184 "HINFO IN 7431896370060291427.1225471116469602556. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0115711s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-616827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-616827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=default-k8s-diff-port-616827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T00_25_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 00:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-616827
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:47:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 00:44:40 +0000   Fri, 16 Aug 2024 00:25:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 00:44:40 +0000   Fri, 16 Aug 2024 00:25:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 00:44:40 +0000   Fri, 16 Aug 2024 00:25:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 00:44:40 +0000   Fri, 16 Aug 2024 00:34:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.128
	  Hostname:    default-k8s-diff-port-616827
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f95a7a8d850e42cfb3645ab68eaceaa1
	  System UUID:                f95a7a8d-850e-42cf-b364-5ab68eaceaa1
	  Boot ID:                    86338a5c-695d-45f2-a39b-7f70b63f7a54
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-6f6b679f8f-4n9qq                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-616827                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-616827             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-616827    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-f99ds                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-616827             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-sxqkg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-616827 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-616827 event: Registered Node default-k8s-diff-port-616827 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-616827 event: Registered Node default-k8s-diff-port-616827 in Controller
	
	
	==> dmesg <==
	[Aug16 00:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053524] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040087] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.943170] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.531616] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609508] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.348477] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.070912] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073416] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.203791] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.167481] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.338843] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +4.329915] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.069062] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.239626] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +5.619659] kauditd_printk_skb: 97 callbacks suppressed
	[Aug16 00:34] systemd-fstab-generator[1562]: Ignoring "noauto" option for root device
	[  +4.173727] kauditd_printk_skb: 64 callbacks suppressed
	[ +24.235315] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] <==
	{"level":"info","ts":"2024-08-16T00:33:55.438766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:55.438790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 received MsgVoteResp from b7d726258a4a2d44 at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:55.438817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:55.438843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b7d726258a4a2d44 elected leader b7d726258a4a2d44 at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:55.442150Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b7d726258a4a2d44","local-member-attributes":"{Name:default-k8s-diff-port-616827 ClientURLs:[https://192.168.50.128:2379]}","request-path":"/0/members/b7d726258a4a2d44/attributes","cluster-id":"cd7de093209a1f5d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T00:33:55.442226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:33:55.442474Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:33:55.442750Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T00:33:55.442788Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T00:33:55.443500Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:33:55.443640Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:33:55.444428Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.128:2379"}
	{"level":"info","ts":"2024-08-16T00:33:55.444430Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T00:34:10.896305Z","caller":"traceutil/trace.go:171","msg":"trace[1004334470] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"128.08971ms","start":"2024-08-16T00:34:10.768203Z","end":"2024-08-16T00:34:10.896293Z","steps":["trace[1004334470] 'process raft request'  (duration: 127.971219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T00:34:11.333158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.72327ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3261891839865383082 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" mod_revision:583 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" value_size:6543 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T00:34:11.333393Z","caller":"traceutil/trace.go:171","msg":"trace[1499638222] linearizableReadLoop","detail":"{readStateIndex:620; appliedIndex:619; }","duration":"343.931988ms","start":"2024-08-16T00:34:10.989447Z","end":"2024-08-16T00:34:11.333379Z","steps":["trace[1499638222] 'read index received'  (duration: 178.284911ms)","trace[1499638222] 'applied index is now lower than readState.Index'  (duration: 165.645474ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T00:34:11.333508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.050672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" ","response":"range_response_count:1 size:6645"}
	{"level":"info","ts":"2024-08-16T00:34:11.333538Z","caller":"traceutil/trace.go:171","msg":"trace[1006926640] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"411.636679ms","start":"2024-08-16T00:34:10.921888Z","end":"2024-08-16T00:34:11.333524Z","steps":["trace[1006926640] 'process raft request'  (duration: 245.892663ms)","trace[1006926640] 'compare'  (duration: 164.487514ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T00:34:11.333625Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T00:34:10.921867Z","time spent":"411.720179ms","remote":"127.0.0.1:50300","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6630,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" mod_revision:583 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" value_size:6543 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" > >"}
	{"level":"info","ts":"2024-08-16T00:34:11.333567Z","caller":"traceutil/trace.go:171","msg":"trace[1089771992] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827; range_end:; response_count:1; response_revision:584; }","duration":"344.115225ms","start":"2024-08-16T00:34:10.989443Z","end":"2024-08-16T00:34:11.333558Z","steps":["trace[1089771992] 'agreement among raft nodes before linearized reading'  (duration: 344.010429ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T00:34:11.333749Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T00:34:10.989401Z","time spent":"344.335576ms","remote":"127.0.0.1:50300","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":6669,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" "}
	{"level":"warn","ts":"2024-08-16T00:34:11.853655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.370284ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3261891839865383094 > lease_revoke:<id:2d4491589352f5fe>","response":"size:29"}
	{"level":"info","ts":"2024-08-16T00:43:55.473133Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":827}
	{"level":"info","ts":"2024-08-16T00:43:55.483808Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":827,"took":"10.210583ms","hash":560694949,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2740224,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-16T00:43:55.483901Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":560694949,"revision":827,"compact-revision":-1}
	
	
	==> kernel <==
	 00:47:25 up 13 min,  0 users,  load average: 0.26, 0.18, 0.11
	Linux default-k8s-diff-port-616827 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 00:43:57.844898       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:43:57.845196       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 00:43:57.846444       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:43:57.846530       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:44:57.846640       1 handler_proxy.go:99] no RequestInfo found in the context
	W0816 00:44:57.846713       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:44:57.846925       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0816 00:44:57.846889       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 00:44:57.848139       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:44:57.848146       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:46:57.849205       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:46:57.849356       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 00:46:57.849407       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:46:57.849453       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 00:46:57.850795       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:46:57.850851       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] <==
	E0816 00:42:00.337394       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:42:00.897726       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:42:30.343832       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:42:30.905143       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:43:00.349666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:43:00.913235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:43:30.356149       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:43:30.921147       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:44:00.362743       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:44:00.928592       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:44:30.368849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:44:30.936687       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:44:40.417256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-616827"
	E0816 00:45:00.378624       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:45:00.943901       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:45:11.474779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="265.384µs"
	I0816 00:45:26.479732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="84.636µs"
	E0816 00:45:30.385317       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:45:30.953708       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:46:00.392244       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:46:00.962780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:46:30.398555       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:46:30.971819       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:47:00.405677       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:47:00.979631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 00:33:58.193675       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 00:33:58.205818       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.128"]
	E0816 00:33:58.205892       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 00:33:58.241695       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 00:33:58.241774       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 00:33:58.241805       1 server_linux.go:169] "Using iptables Proxier"
	I0816 00:33:58.244716       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 00:33:58.244984       1 server.go:483] "Version info" version="v1.31.0"
	I0816 00:33:58.245011       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:33:58.246550       1 config.go:197] "Starting service config controller"
	I0816 00:33:58.246590       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 00:33:58.246612       1 config.go:104] "Starting endpoint slice config controller"
	I0816 00:33:58.246615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 00:33:58.247592       1 config.go:326] "Starting node config controller"
	I0816 00:33:58.247673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 00:33:58.347018       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 00:33:58.347125       1 shared_informer.go:320] Caches are synced for service config
	I0816 00:33:58.348454       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] <==
	I0816 00:33:54.052741       1 serving.go:386] Generated self-signed cert in-memory
	W0816 00:33:56.776593       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 00:33:56.776723       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 00:33:56.776816       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 00:33:56.776842       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 00:33:56.827441       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 00:33:56.827490       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:33:56.835559       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 00:33:56.835664       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 00:33:56.835692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 00:33:56.835816       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 00:33:56.937583       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 00:46:19 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:19.459602     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:46:22 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:22.672033     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769182671438019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:22 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:22.673388     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769182671438019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:32 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:32.460344     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:46:32 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:32.675645     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769192675044023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:32 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:32.675690     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769192675044023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:42 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:42.677322     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769202676846653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:42 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:42.677368     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769202676846653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:43 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:43.459865     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:46:52 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:52.499635     934 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 00:46:52 default-k8s-diff-port-616827 kubelet[934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 00:46:52 default-k8s-diff-port-616827 kubelet[934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 00:46:52 default-k8s-diff-port-616827 kubelet[934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 00:46:52 default-k8s-diff-port-616827 kubelet[934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 00:46:52 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:52.679041     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769212678660961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:52 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:52.679132     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769212678660961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:46:58 default-k8s-diff-port-616827 kubelet[934]: E0816 00:46:58.463477     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:47:02 default-k8s-diff-port-616827 kubelet[934]: E0816 00:47:02.680850     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769222680424794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:47:02 default-k8s-diff-port-616827 kubelet[934]: E0816 00:47:02.680944     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769222680424794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:47:09 default-k8s-diff-port-616827 kubelet[934]: E0816 00:47:09.459426     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:47:12 default-k8s-diff-port-616827 kubelet[934]: E0816 00:47:12.682700     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769232681946350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:47:12 default-k8s-diff-port-616827 kubelet[934]: E0816 00:47:12.683319     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769232681946350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:47:21 default-k8s-diff-port-616827 kubelet[934]: E0816 00:47:21.461871     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:47:22 default-k8s-diff-port-616827 kubelet[934]: E0816 00:47:22.685989     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769242685482497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:47:22 default-k8s-diff-port-616827 kubelet[934]: E0816 00:47:22.686018     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769242685482497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] <==
	I0816 00:34:28.788543       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 00:34:28.798932       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 00:34:28.799199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 00:34:46.197518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 00:34:46.197796       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-616827_53b2e7df-3a1c-4ab3-8ea1-e7f4c14435eb!
	I0816 00:34:46.199233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbab41f0-f88f-4bae-ac33-357844cf541c", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-616827_53b2e7df-3a1c-4ab3-8ea1-e7f4c14435eb became leader
	I0816 00:34:46.298888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-616827_53b2e7df-3a1c-4ab3-8ea1-e7f4c14435eb!
	
	
	==> storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] <==
	I0816 00:33:58.101946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 00:34:28.104915       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-616827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-sxqkg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-616827 describe pod metrics-server-6867b74b74-sxqkg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-616827 describe pod metrics-server-6867b74b74-sxqkg: exit status 1 (62.028204ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-sxqkg" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-616827 describe pod metrics-server-6867b74b74-sxqkg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.31s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.41s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0816 00:39:53.799521   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:40:25.212353   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:40:54.235638   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:41:31.508957   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:41:48.276851   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:42:09.801565   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-819398 -n no-preload-819398
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-16 00:48:51.741534369 +0000 UTC m=+6206.322113343
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-819398 -n no-preload-819398
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-819398 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-819398 logs -n 25: (2.202444621s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-697641 sudo cat                              | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo find                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo crio                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-697641                                       | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-067133 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | disable-driver-mounts-067133                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:25 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-819398             | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC | 16 Aug 24 00:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-758469            | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-616827  | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098619        | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-819398                  | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-758469                 | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-616827       | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098619             | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:29:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 00:29:51.785297   79191 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:29:51.785388   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785392   79191 out.go:358] Setting ErrFile to fd 2...
	I0816 00:29:51.785396   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785578   79191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:29:51.786145   79191 out.go:352] Setting JSON to false
	I0816 00:29:51.787066   79191 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7892,"bootTime":1723760300,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:29:51.787122   79191 start.go:139] virtualization: kvm guest
	I0816 00:29:51.789057   79191 out.go:177] * [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:29:51.790274   79191 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:29:51.790269   79191 notify.go:220] Checking for updates...
	I0816 00:29:51.792828   79191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:29:51.794216   79191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:29:51.795553   79191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:29:51.796761   79191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:29:51.798018   79191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:29:51.799561   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:29:51.799935   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.799990   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.814617   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0816 00:29:51.815056   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.815584   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.815606   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.815933   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.816131   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:51.817809   79191 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 00:29:51.819204   79191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:29:51.819604   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.819652   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.834270   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0816 00:29:51.834584   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.834992   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.835015   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.835303   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.835478   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:49.226097   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:51.870472   79191 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:29:51.872031   79191 start.go:297] selected driver: kvm2
	I0816 00:29:51.872049   79191 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.872137   79191 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:29:51.872785   79191 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.872848   79191 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:29:51.887731   79191 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:29:51.888078   79191 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:29:51.888141   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:29:51.888154   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:29:51.888203   79191 start.go:340] cluster config:
	{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.888300   79191 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.890190   79191 out.go:177] * Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	I0816 00:29:51.891529   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:29:51.891557   79191 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:29:51.891565   79191 cache.go:56] Caching tarball of preloaded images
	I0816 00:29:51.891645   79191 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:29:51.891664   79191 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 00:29:51.891747   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:29:51.891915   79191 start.go:360] acquireMachinesLock for old-k8s-version-098619: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:29:55.306158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:58.378266   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:04.458137   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:07.530158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:13.610160   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:16.682057   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:22.762088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:25.834157   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:31.914106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:34.986091   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:41.066143   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:44.138152   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:50.218140   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:53.290166   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:59.370080   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:02.442130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:08.522126   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:11.594144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:17.674104   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:20.746185   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:26.826131   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:29.898113   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:35.978100   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:39.050136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:45.130120   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:48.202078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:54.282078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:57.354088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:03.434136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:06.506153   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:12.586125   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:15.658144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:21.738130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:24.810191   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:30.890130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:33.962132   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:40.042062   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:43.114154   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:49.194151   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:52.266130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:58.346106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:01.418139   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
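
The long run of identical lines above comes from libmachine repeatedly probing the guest's SSH port while the no-preload VM is unreachable. As a rough illustration only (this is not minikube's libmachine code; the address and timeout are taken from or assumed from the log), the failing probe boils down to a TCP dial like this:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the guest's SSH port; while the VM is down or not yet on the
	// network, the dial fails with "connect: no route to host" and the
	// caller keeps retrying, as seen in the log above.
	conn, err := net.DialTimeout("tcp", "192.168.61.15:22", 3*time.Second)
	if err != nil {
		fmt.Println("Error dialing TCP:", err)
		return
	}
	defer conn.Close()
	fmt.Println("SSH port reachable at", conn.RemoteAddr())
}
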
	I0816 00:33:04.422042   78713 start.go:364] duration metric: took 4m25.166768519s to acquireMachinesLock for "embed-certs-758469"
	I0816 00:33:04.422099   78713 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:04.422107   78713 fix.go:54] fixHost starting: 
	I0816 00:33:04.422426   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:04.422458   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:04.437335   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I0816 00:33:04.437779   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:04.438284   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:04.438306   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:04.438646   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:04.438873   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:04.439045   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:04.440597   78713 fix.go:112] recreateIfNeeded on embed-certs-758469: state=Stopped err=<nil>
	I0816 00:33:04.440627   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	W0816 00:33:04.440781   78713 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:04.442527   78713 out.go:177] * Restarting existing kvm2 VM for "embed-certs-758469" ...
	I0816 00:33:04.419735   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:04.419772   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420077   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:33:04.420102   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420299   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:33:04.421914   78489 machine.go:96] duration metric: took 4m37.429789672s to provisionDockerMachine
	I0816 00:33:04.421957   78489 fix.go:56] duration metric: took 4m37.451098771s for fixHost
	I0816 00:33:04.421965   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 4m37.451130669s
	W0816 00:33:04.421995   78489 start.go:714] error starting host: provision: host is not running
	W0816 00:33:04.422099   78489 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 00:33:04.422111   78489 start.go:729] Will try again in 5 seconds ...
	I0816 00:33:04.443838   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Start
	I0816 00:33:04.444035   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring networks are active...
	I0816 00:33:04.444849   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network default is active
	I0816 00:33:04.445168   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network mk-embed-certs-758469 is active
	I0816 00:33:04.445491   78713 main.go:141] libmachine: (embed-certs-758469) Getting domain xml...
	I0816 00:33:04.446159   78713 main.go:141] libmachine: (embed-certs-758469) Creating domain...
	I0816 00:33:05.654817   78713 main.go:141] libmachine: (embed-certs-758469) Waiting to get IP...
	I0816 00:33:05.655625   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.656020   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.656064   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.655983   79868 retry.go:31] will retry after 273.341379ms: waiting for machine to come up
	I0816 00:33:05.930542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.931038   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.931061   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.931001   79868 retry.go:31] will retry after 320.172619ms: waiting for machine to come up
	I0816 00:33:06.252718   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.253117   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.253140   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.253091   79868 retry.go:31] will retry after 441.386495ms: waiting for machine to come up
	I0816 00:33:06.695681   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.696108   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.696134   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.696065   79868 retry.go:31] will retry after 491.272986ms: waiting for machine to come up
	I0816 00:33:07.188683   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.189070   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.189092   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.189025   79868 retry.go:31] will retry after 536.865216ms: waiting for machine to come up
	I0816 00:33:07.727831   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.728246   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.728276   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.728193   79868 retry.go:31] will retry after 813.064342ms: waiting for machine to come up
	I0816 00:33:08.543096   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:08.543605   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:08.543637   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:08.543549   79868 retry.go:31] will retry after 1.00495091s: waiting for machine to come up
	I0816 00:33:09.424586   78489 start.go:360] acquireMachinesLock for no-preload-819398: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:33:09.549815   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:09.550226   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:09.550255   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:09.550175   79868 retry.go:31] will retry after 1.483015511s: waiting for machine to come up
	I0816 00:33:11.034879   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:11.035277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:11.035315   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:11.035224   79868 retry.go:31] will retry after 1.513237522s: waiting for machine to come up
	I0816 00:33:12.550817   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:12.551172   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:12.551196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:12.551126   79868 retry.go:31] will retry after 1.483165174s: waiting for machine to come up
	I0816 00:33:14.036748   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:14.037142   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:14.037170   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:14.037087   79868 retry.go:31] will retry after 1.772679163s: waiting for machine to come up
	I0816 00:33:15.811699   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:15.812300   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:15.812334   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:15.812226   79868 retry.go:31] will retry after 3.026936601s: waiting for machine to come up
	I0816 00:33:18.842362   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:18.842759   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:18.842788   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:18.842715   79868 retry.go:31] will retry after 4.400445691s: waiting for machine to come up
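
The retry.go lines above show the other wait pattern in this log: poll a check with a growing delay until the machine reports an IP or a deadline passes. A minimal, self-contained sketch of that pattern follows (illustrative only, not minikube's actual retry.go; the delays and timeout are assumptions):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check with an increasing delay until it returns nil or the
// timeout elapses.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay += delay / 2 // stretch the wait between attempts
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}
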
	I0816 00:33:23.247813   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248223   78713 main.go:141] libmachine: (embed-certs-758469) Found IP for machine: 192.168.39.185
	I0816 00:33:23.248254   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has current primary IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248265   78713 main.go:141] libmachine: (embed-certs-758469) Reserving static IP address...
	I0816 00:33:23.248613   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.248641   78713 main.go:141] libmachine: (embed-certs-758469) DBG | skip adding static IP to network mk-embed-certs-758469 - found existing host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"}
	I0816 00:33:23.248654   78713 main.go:141] libmachine: (embed-certs-758469) Reserved static IP address: 192.168.39.185
	I0816 00:33:23.248673   78713 main.go:141] libmachine: (embed-certs-758469) Waiting for SSH to be available...
	I0816 00:33:23.248687   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Getting to WaitForSSH function...
	I0816 00:33:23.250607   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.250931   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.250965   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.251113   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH client type: external
	I0816 00:33:23.251141   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa (-rw-------)
	I0816 00:33:23.251179   78713 main.go:141] libmachine: (embed-certs-758469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:23.251196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | About to run SSH command:
	I0816 00:33:23.251211   78713 main.go:141] libmachine: (embed-certs-758469) DBG | exit 0
	I0816 00:33:23.373899   78713 main.go:141] libmachine: (embed-certs-758469) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:23.374270   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetConfigRaw
	I0816 00:33:23.374914   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.377034   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377343   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.377370   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377561   78713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/config.json ...
	I0816 00:33:23.377760   78713 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:23.377776   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:23.378014   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.379950   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380248   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.380277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380369   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.380524   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380668   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380795   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.380950   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.381134   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.381145   78713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:23.486074   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:23.486106   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486462   78713 buildroot.go:166] provisioning hostname "embed-certs-758469"
	I0816 00:33:23.486491   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486677   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.489520   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.489905   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.489924   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.490108   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.490279   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490427   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490566   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.490730   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.490901   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.490920   78713 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-758469 && echo "embed-certs-758469" | sudo tee /etc/hostname
	I0816 00:33:23.614635   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-758469
	
	I0816 00:33:23.614671   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.617308   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617673   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.617701   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617881   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.618087   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618351   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.618536   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.618721   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.618746   78713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-758469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-758469/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-758469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:23.734901   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:23.734931   78713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:23.734946   78713 buildroot.go:174] setting up certificates
	I0816 00:33:23.734953   78713 provision.go:84] configureAuth start
	I0816 00:33:23.734961   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.735255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.737952   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738312   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.738341   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738445   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.740589   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.740926   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.740953   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.741060   78713 provision.go:143] copyHostCerts
	I0816 00:33:23.741121   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:23.741138   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:23.741203   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:23.741357   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:23.741367   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:23.741393   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:23.741452   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:23.741458   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:23.741478   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:23.741525   78713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.embed-certs-758469 san=[127.0.0.1 192.168.39.185 embed-certs-758469 localhost minikube]
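
The provision.go line above records generation of a server certificate signed by the minikube CA, carrying the SANs [127.0.0.1 192.168.39.185 embed-certs-758469 localhost minikube]. A compact sketch of that step using Go's crypto/x509 (an assumption-laden illustration, not minikube's provisioning code; the throwaway CA and validity period are made up):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Throwaway CA key and certificate standing in for ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs recorded in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-758469"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"embed-certs-758469", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.185")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("server.pem: %d bytes of PEM\n", len(pemBytes))
}
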
	I0816 00:33:23.871115   78713 provision.go:177] copyRemoteCerts
	I0816 00:33:23.871167   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:23.871190   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.874049   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874505   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.874538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874720   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.874913   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.875079   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.875210   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:23.959910   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:23.984454   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:33:24.009067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:24.036195   78713 provision.go:87] duration metric: took 301.229994ms to configureAuth
	I0816 00:33:24.036218   78713 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:24.036389   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:24.036453   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.039196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.039562   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039771   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.039970   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040125   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040224   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.040372   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.040584   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.040612   78713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:24.550693   78747 start.go:364] duration metric: took 4m44.527028624s to acquireMachinesLock for "default-k8s-diff-port-616827"
	I0816 00:33:24.550757   78747 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:24.550763   78747 fix.go:54] fixHost starting: 
	I0816 00:33:24.551164   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:24.551203   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:24.567741   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0816 00:33:24.568138   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:24.568674   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:33:24.568703   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:24.569017   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:24.569212   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:24.569385   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:33:24.570856   78747 fix.go:112] recreateIfNeeded on default-k8s-diff-port-616827: state=Stopped err=<nil>
	I0816 00:33:24.570901   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	W0816 00:33:24.571074   78747 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:24.572673   78747 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-616827" ...
	I0816 00:33:24.574220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Start
	I0816 00:33:24.574403   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring networks are active...
	I0816 00:33:24.575086   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network default is active
	I0816 00:33:24.575528   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network mk-default-k8s-diff-port-616827 is active
	I0816 00:33:24.576033   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Getting domain xml...
	I0816 00:33:24.576734   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Creating domain...
	I0816 00:33:24.314921   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:24.314951   78713 machine.go:96] duration metric: took 937.178488ms to provisionDockerMachine
	I0816 00:33:24.314964   78713 start.go:293] postStartSetup for "embed-certs-758469" (driver="kvm2")
	I0816 00:33:24.314974   78713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:24.315007   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.315405   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:24.315430   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.317962   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318242   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.318270   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318390   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.318588   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.318763   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.318900   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.400628   78713 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:24.405061   78713 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:24.405082   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:24.405148   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:24.405215   78713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:24.405302   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:24.414985   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:24.439646   78713 start.go:296] duration metric: took 124.668147ms for postStartSetup
	I0816 00:33:24.439692   78713 fix.go:56] duration metric: took 20.017583324s for fixHost
	I0816 00:33:24.439719   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.442551   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.442920   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.442954   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.443051   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.443257   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443434   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443567   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.443740   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.443912   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.443921   78713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:24.550562   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768404.525876526
	
	I0816 00:33:24.550588   78713 fix.go:216] guest clock: 1723768404.525876526
	I0816 00:33:24.550599   78713 fix.go:229] Guest: 2024-08-16 00:33:24.525876526 +0000 UTC Remote: 2024-08-16 00:33:24.439696953 +0000 UTC m=+285.318245053 (delta=86.179573ms)
	I0816 00:33:24.550618   78713 fix.go:200] guest clock delta is within tolerance: 86.179573ms
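fix.go reads the guest clock with `date +%s.%N`, compares it with the local view of remote time, and accepts the ~86ms delta seen above. A small sketch of that comparison; the guest value is the one from the log, and the one-second tolerance is an arbitrary illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723768404.525876526") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}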
	I0816 00:33:24.550623   78713 start.go:83] releasing machines lock for "embed-certs-758469", held for 20.128541713s
	I0816 00:33:24.550647   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.551090   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:24.554013   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554358   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.554382   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554572   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555062   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555222   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555279   78713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:24.555330   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.555441   78713 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:24.555463   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.558216   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558368   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558567   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558719   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558723   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558742   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558925   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559074   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559205   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559285   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.559329   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.656926   78713 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:24.662590   78713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:24.811290   78713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:24.817486   78713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:24.817570   78713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:24.838317   78713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:24.838342   78713 start.go:495] detecting cgroup driver to use...
	I0816 00:33:24.838396   78713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:24.856294   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:24.875603   78713 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:24.875650   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:24.890144   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:24.904327   78713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:25.018130   78713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:25.149712   78713 docker.go:233] disabling docker service ...
	I0816 00:33:25.149795   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:25.165494   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:25.179554   78713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:25.330982   78713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:25.476436   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:25.493242   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:25.515688   78713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:25.515762   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.529924   78713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:25.529997   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.541412   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.551836   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.563356   78713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:25.574486   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.585533   78713 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.604169   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
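The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. The two headline substitutions can be expressed directly in Go as shown below; the config excerpt is an invented stand-in for the real drop-in file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A trimmed stand-in for /etc/crio/crio.conf.d/02-crio.conf (illustrative content only).
	conf := []byte(`[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`)

	// Equivalent of the two sed substitutions from the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))

	fmt.Print(string(conf))
}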
	I0816 00:33:25.615335   78713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:25.629366   78713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:25.629427   78713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:25.645937   78713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:25.657132   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:25.771891   78713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:25.914817   78713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:25.914904   78713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:25.919572   78713 start.go:563] Will wait 60s for crictl version
	I0816 00:33:25.919620   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:33:25.923419   78713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:25.969387   78713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:25.969484   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.002529   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.035709   78713 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:26.036921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:26.039638   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040001   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:26.040023   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040254   78713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:26.044444   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:26.057172   78713 kubeadm.go:883] updating cluster {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:26.057326   78713 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:26.057382   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:26.093950   78713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:26.094031   78713 ssh_runner.go:195] Run: which lz4
	I0816 00:33:26.097998   78713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:26.102152   78713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:26.102183   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:27.538323   78713 crio.go:462] duration metric: took 1.440354469s to copy over tarball
	I0816 00:33:27.538400   78713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:25.885210   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting to get IP...
	I0816 00:33:25.886135   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886555   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:25.886538   80004 retry.go:31] will retry after 214.751664ms: waiting for machine to come up
	I0816 00:33:26.103182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103652   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103677   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.103603   80004 retry.go:31] will retry after 239.667632ms: waiting for machine to come up
	I0816 00:33:26.345223   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345776   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.345701   80004 retry.go:31] will retry after 474.740445ms: waiting for machine to come up
	I0816 00:33:26.822224   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822682   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822716   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.822639   80004 retry.go:31] will retry after 574.324493ms: waiting for machine to come up
	I0816 00:33:27.398433   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398939   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398971   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.398904   80004 retry.go:31] will retry after 567.388033ms: waiting for machine to come up
	I0816 00:33:27.967686   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968225   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.968093   80004 retry.go:31] will retry after 940.450394ms: waiting for machine to come up
	I0816 00:33:28.910549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911058   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911088   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:28.911031   80004 retry.go:31] will retry after 919.494645ms: waiting for machine to come up
	I0816 00:33:29.832687   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833204   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833244   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:29.833189   80004 retry.go:31] will retry after 1.332024716s: waiting for machine to come up
	I0816 00:33:29.677224   78713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.138774475s)
	I0816 00:33:29.677252   78713 crio.go:469] duration metric: took 2.138901242s to extract the tarball
	I0816 00:33:29.677261   78713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:29.716438   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:29.768597   78713 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:29.768622   78713 cache_images.go:84] Images are preloaded, skipping loading
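The preload check above works by listing images with `sudo crictl images --output json` and looking for the expected kube-apiserver tag. A sketch of that check, assuming crictl's JSON output carries an `images` array with `repoTags`; run it on the guest or anywhere crictl is installed:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Same command the log runs on the guest.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.0" // the tag the log checks for
	found := false
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				found = true
			}
		}
	}
	fmt.Printf("preloaded %s: %v\n", want, found)
}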
	I0816 00:33:29.768634   78713 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.0 crio true true} ...
	I0816 00:33:29.768787   78713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-758469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
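The kubelet override rendered above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 318-byte scp a few lines below). A sketch of writing that drop-in and reloading systemd by hand; it assumes root on the guest and that the drop-in directory already exists:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Assembled from the kubelet override shown above; paths as in the log.
	unit := `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-758469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185

[Install]
`
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0644); err != nil {
		panic(err)
	}
	// Pick up the new drop-in, as the log does with `sudo systemctl daemon-reload`.
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		panic(err)
	}
}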
	I0816 00:33:29.768874   78713 ssh_runner.go:195] Run: crio config
	I0816 00:33:29.813584   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:29.813607   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:29.813620   78713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:29.813644   78713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-758469 NodeName:embed-certs-758469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:29.813776   78713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-758469"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:29.813862   78713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:29.825680   78713 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:29.825744   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:29.836314   78713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 00:33:29.853030   78713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:29.869368   78713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
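The 2162-byte kubeadm.yaml.new written here is the four-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---). A tiny stdlib sketch that splits such a multi-document file and reports the kind of each part, using a shortened stand-in for the real content:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Shortened stand-in for the rendered kubeadm.yaml shown above.
	doc := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	for i, part := range strings.Split(doc, "\n---\n") {
		for _, line := range strings.Split(part, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}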
	I0816 00:33:29.886814   78713 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:29.890644   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:29.903138   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:30.040503   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:30.058323   78713 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469 for IP: 192.168.39.185
	I0816 00:33:30.058351   78713 certs.go:194] generating shared ca certs ...
	I0816 00:33:30.058372   78713 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:30.058559   78713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:30.058624   78713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:30.058638   78713 certs.go:256] generating profile certs ...
	I0816 00:33:30.058778   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/client.key
	I0816 00:33:30.058873   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key.0d0e36ad
	I0816 00:33:30.058930   78713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key
	I0816 00:33:30.059101   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:30.059146   78713 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:30.059162   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:30.059197   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:30.059251   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:30.059285   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:30.059345   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:30.060202   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:30.098381   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:30.135142   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:30.175518   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:30.214349   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 00:33:30.249278   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:30.273772   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:30.298067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:30.324935   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:30.351149   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:30.375636   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:30.399250   78713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:30.417646   78713 ssh_runner.go:195] Run: openssl version
	I0816 00:33:30.423691   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:30.435254   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439651   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439700   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.445673   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:30.456779   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:30.467848   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472199   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472274   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.478109   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:30.489481   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:30.500747   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505116   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505162   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.510739   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:33:30.521829   78713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:30.526444   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:30.532373   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:30.538402   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:30.544697   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:30.550762   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:30.556573   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
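Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate expires within the next day. An equivalent check with crypto/x509, shown only as an illustration of what -checkend does; the path is one of those probed in the log and the snippet has to run on the guest:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend <seconds>`: it reports whether
// the certificate at path expires within the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// One of the paths checked in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	fmt.Println(soon, err)
}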
	I0816 00:33:30.562513   78713 kubeadm.go:392] StartCluster: {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:30.562602   78713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:30.562650   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.607119   78713 cri.go:89] found id: ""
	I0816 00:33:30.607197   78713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:30.617798   78713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:30.617818   78713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:30.617873   78713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:30.627988   78713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:30.628976   78713 kubeconfig.go:125] found "embed-certs-758469" server: "https://192.168.39.185:8443"
	I0816 00:33:30.631601   78713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:30.642001   78713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.185
	I0816 00:33:30.642036   78713 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:30.642047   78713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:30.642088   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.685946   78713 cri.go:89] found id: ""
	I0816 00:33:30.686049   78713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:30.704130   78713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:30.714467   78713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:30.714490   78713 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:30.714534   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:33:30.723924   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:30.723985   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:30.733804   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:33:30.743345   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:30.743412   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:30.753604   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.763271   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:30.763340   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.773121   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:33:30.782507   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:30.782565   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:30.792652   78713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:30.802523   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:30.923193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.206424   78713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.283195087s)
	I0816 00:33:32.206449   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.435275   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.509193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.590924   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:32.591020   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.091804   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.591198   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.607568   78713 api_server.go:72] duration metric: took 1.016656713s to wait for apiserver process to appear ...
	I0816 00:33:33.607596   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:33.607619   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
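From here the log polls https://192.168.39.185:8443/healthz roughly every 500ms, first getting 403 for the anonymous user and then 500 while post-start hooks finish. A sketch of the same polling loop; certificate verification is skipped only because this illustration has no access to the cluster CA, and the one-minute deadline is an assumption rather than minikube's actual timeout:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above.
	url := "https://192.168.39.185:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
	fmt.Println("healthz did not become ready before the deadline")
}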
	I0816 00:33:31.166506   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166900   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166927   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:31.166860   80004 retry.go:31] will retry after 1.213971674s: waiting for machine to come up
	I0816 00:33:32.382376   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382862   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382889   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:32.382821   80004 retry.go:31] will retry after 2.115615681s: waiting for machine to come up
	I0816 00:33:34.501236   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501697   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:34.501646   80004 retry.go:31] will retry after 2.495252025s: waiting for machine to come up
	I0816 00:33:36.334341   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.334374   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.334389   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.351971   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.352011   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.608364   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.614582   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:36.614619   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.107654   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.113352   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.113384   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.607902   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.614677   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.614710   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.108329   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.112493   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.112521   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.608061   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.613134   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.613172   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.107667   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.111920   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:39.111954   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.608190   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.613818   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:33:39.619467   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:39.619490   78713 api_server.go:131] duration metric: took 6.011887872s to wait for apiserver health ...
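The readiness loop recorded above polls https://192.168.39.185:8443/healthz roughly every half second until the apiserver stops answering 403/500 and returns 200 "ok". A minimal Go sketch of an equivalent probe follows; the overall timeout and the TLS-verification skip are illustrative assumptions for a self-signed test cluster, not minikube's actual client configuration.

    // healthz_probe.go: minimal sketch of polling an apiserver /healthz
    // endpoint until it reports "ok" (assumes a self-signed certificate).
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Certificate checks are skipped in this sketch only because the
            // test VM serves a self-signed cert; do not do this in production.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reported "ok"
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log
        }
        return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.185:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }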
	I0816 00:33:39.619499   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:39.619504   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:39.621572   78713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:36.999158   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999616   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999645   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:36.999576   80004 retry.go:31] will retry after 2.736710806s: waiting for machine to come up
	I0816 00:33:39.737818   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738286   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738320   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:39.738215   80004 retry.go:31] will retry after 3.3205645s: waiting for machine to come up
	I0816 00:33:39.623254   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:39.633910   78713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:33:39.653736   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:39.663942   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:39.663983   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:39.663994   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:39.664044   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:39.664060   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:39.664067   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:33:39.664078   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:39.664089   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:39.664107   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:33:39.664118   78713 system_pods.go:74] duration metric: took 10.358906ms to wait for pod list to return data ...
	I0816 00:33:39.664127   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:39.667639   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:39.667669   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:39.667682   78713 node_conditions.go:105] duration metric: took 3.547018ms to run NodePressure ...
	I0816 00:33:39.667701   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:39.929620   78713 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934264   78713 kubeadm.go:739] kubelet initialised
	I0816 00:33:39.934289   78713 kubeadm.go:740] duration metric: took 4.64037ms waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934299   78713 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:39.938771   78713 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.943735   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943760   78713 pod_ready.go:82] duration metric: took 4.962601ms for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.943772   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943781   78713 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.947900   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947925   78713 pod_ready.go:82] duration metric: took 4.129605ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.947936   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947943   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.953367   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953400   78713 pod_ready.go:82] duration metric: took 5.445682ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.953412   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953422   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.057510   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057533   78713 pod_ready.go:82] duration metric: took 104.099944ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.057543   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057548   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.458355   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458389   78713 pod_ready.go:82] duration metric: took 400.832009ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.458400   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458408   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.857939   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857964   78713 pod_ready.go:82] duration metric: took 399.549123ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.857974   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857980   78713 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:41.257101   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257126   78713 pod_ready.go:82] duration metric: took 399.13078ms for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:41.257135   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257142   78713 pod_ready.go:39] duration metric: took 1.322827054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
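The pod_ready.go wait above checks each system-critical pod's Ready condition and skips the wait whenever the hosting node itself reports Ready=False, as seen here for embed-certs-758469. A minimal client-go sketch of the underlying Ready-condition check is shown below; the kubeconfig path is a placeholder and error handling is abbreviated, so this illustrates the check rather than reproducing minikube's implementation.

    // podready_sketch.go: minimal client-go sketch of a pod "Ready" check.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady returns true when the PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
            "coredns-6f6b679f8f-54gqb", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", podIsReady(pod))
    }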
	I0816 00:33:41.257159   78713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:33:41.269076   78713 ops.go:34] apiserver oom_adj: -16
	I0816 00:33:41.269098   78713 kubeadm.go:597] duration metric: took 10.651273415s to restartPrimaryControlPlane
	I0816 00:33:41.269107   78713 kubeadm.go:394] duration metric: took 10.706599955s to StartCluster
	I0816 00:33:41.269127   78713 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.269191   78713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:33:41.271380   78713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.271679   78713 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:33:41.271714   78713 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:33:41.271812   78713 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-758469"
	I0816 00:33:41.271834   78713 addons.go:69] Setting default-storageclass=true in profile "embed-certs-758469"
	I0816 00:33:41.271845   78713 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-758469"
	W0816 00:33:41.271858   78713 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:33:41.271874   78713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-758469"
	I0816 00:33:41.271882   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:41.271891   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.271860   78713 addons.go:69] Setting metrics-server=true in profile "embed-certs-758469"
	I0816 00:33:41.271934   78713 addons.go:234] Setting addon metrics-server=true in "embed-certs-758469"
	W0816 00:33:41.271952   78713 addons.go:243] addon metrics-server should already be in state true
	I0816 00:33:41.272022   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.272324   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272575   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272604   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272704   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272718   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272745   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.274599   78713 out.go:177] * Verifying Kubernetes components...
	I0816 00:33:41.276283   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:41.292526   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0816 00:33:41.292560   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0816 00:33:41.292556   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43083
	I0816 00:33:41.293000   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293053   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293004   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293482   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293499   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293592   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293606   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293625   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293607   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293891   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293939   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293976   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.294132   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.294475   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294483   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294517   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.294522   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.297714   78713 addons.go:234] Setting addon default-storageclass=true in "embed-certs-758469"
	W0816 00:33:41.297747   78713 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:33:41.297787   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.298192   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.298238   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.310002   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0816 00:33:41.310000   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0816 00:33:41.310469   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310521   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310899   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.310917   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311027   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.311048   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311293   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311476   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.311491   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311642   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.313614   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.313697   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.315474   78713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:33:41.315484   78713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:33:41.316719   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0816 00:33:41.316887   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:33:41.316902   78713 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:33:41.316921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.316975   78713 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.316985   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:33:41.316995   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.317061   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.317572   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.317594   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.317941   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.318669   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.318702   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.320288   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320668   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.320695   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320726   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320939   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321241   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.321267   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.321402   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321497   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.321547   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321592   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.321883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.322021   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.334230   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0816 00:33:41.334580   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.335088   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.335107   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.335387   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.335549   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.336891   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.337084   78713 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.337100   78713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:33:41.337115   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.340204   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340667   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.340697   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340837   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.340987   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.341120   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.341277   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.476131   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:41.502242   78713 node_ready.go:35] waiting up to 6m0s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:41.559562   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.575913   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:33:41.575937   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:33:41.614763   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:33:41.614784   78713 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:33:41.628658   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.670367   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:41.670393   78713 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:33:41.746638   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:42.849125   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.22043382s)
	I0816 00:33:42.849189   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849202   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849397   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.289807606s)
	I0816 00:33:42.849438   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849448   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849478   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849514   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849524   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849538   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849550   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849761   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849803   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849813   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849825   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849833   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.850018   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850033   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.850059   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850059   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.850078   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856398   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.856419   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.856647   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.856667   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856676   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901261   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1545817s)
	I0816 00:33:42.901314   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901329   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901619   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901680   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901694   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901704   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901713   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901953   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901973   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901986   78713 addons.go:475] Verifying addon metrics-server=true in "embed-certs-758469"
	I0816 00:33:42.904677   78713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 00:33:42.905802   78713 addons.go:510] duration metric: took 1.634089536s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 00:33:43.506584   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:44.254575   79191 start.go:364] duration metric: took 3m52.362627542s to acquireMachinesLock for "old-k8s-version-098619"
	I0816 00:33:44.254648   79191 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:44.254659   79191 fix.go:54] fixHost starting: 
	I0816 00:33:44.255099   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:44.255137   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:44.271236   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0816 00:33:44.271591   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:44.272030   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:33:44.272052   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:44.272328   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:44.272503   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:33:44.272660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetState
	I0816 00:33:44.274235   79191 fix.go:112] recreateIfNeeded on old-k8s-version-098619: state=Stopped err=<nil>
	I0816 00:33:44.274272   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	W0816 00:33:44.274415   79191 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:44.275978   79191 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-098619" ...
	I0816 00:33:43.059949   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060413   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Found IP for machine: 192.168.50.128
	I0816 00:33:43.060440   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserving static IP address...
	I0816 00:33:43.060479   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has current primary IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060881   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.060906   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | skip adding static IP to network mk-default-k8s-diff-port-616827 - found existing host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"}
	I0816 00:33:43.060921   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserved static IP address: 192.168.50.128
	I0816 00:33:43.060937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for SSH to be available...
	I0816 00:33:43.060952   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Getting to WaitForSSH function...
	I0816 00:33:43.063249   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063552   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.063592   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH client type: external
	I0816 00:33:43.063833   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa (-rw-------)
	I0816 00:33:43.063877   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:43.063896   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | About to run SSH command:
	I0816 00:33:43.063905   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | exit 0
	I0816 00:33:43.185986   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:43.186338   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetConfigRaw
	I0816 00:33:43.186944   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.189324   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189617   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.189643   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189890   78747 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/config.json ...
	I0816 00:33:43.190166   78747 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:43.190192   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:43.190401   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.192515   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192836   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.192865   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192940   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.193118   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193280   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193454   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.193614   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.193812   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.193825   78747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:43.290143   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:43.290168   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290395   78747 buildroot.go:166] provisioning hostname "default-k8s-diff-port-616827"
	I0816 00:33:43.290422   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290603   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.293231   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.293665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293829   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.294038   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294195   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294325   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.294479   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.294685   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.294703   78747 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-616827 && echo "default-k8s-diff-port-616827" | sudo tee /etc/hostname
	I0816 00:33:43.406631   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-616827
	
	I0816 00:33:43.406655   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.409271   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409610   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.409641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409794   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.409984   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410160   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.410491   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.410670   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.410695   78747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-616827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-616827/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-616827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:43.515766   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:43.515796   78747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:43.515829   78747 buildroot.go:174] setting up certificates
	I0816 00:33:43.515841   78747 provision.go:84] configureAuth start
	I0816 00:33:43.515850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.516128   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.518730   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519055   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.519087   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.521186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.521538   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521691   78747 provision.go:143] copyHostCerts
	I0816 00:33:43.521746   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:43.521764   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:43.521822   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:43.521949   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:43.521959   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:43.521982   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:43.522050   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:43.522057   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:43.522074   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:43.522132   78747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-616827 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-616827 localhost minikube]
	I0816 00:33:43.601126   78747 provision.go:177] copyRemoteCerts
	I0816 00:33:43.601179   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:43.601203   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.603816   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604148   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.604180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.604549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.604725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.604863   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:43.686829   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:43.712297   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 00:33:43.738057   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:43.762820   78747 provision.go:87] duration metric: took 246.967064ms to configureAuth
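For readers following the provisioning steps, the server-cert generation logged at provision.go:117 above amounts to signing a certificate whose SANs cover the loopback address, the VM IP and the machine hostnames. The following is only a minimal Go sketch of that idea, not minikube's implementation; the file names, the PKCS#1 key format and the validity period are assumptions taken from the log above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA generated earlier (paths and key format are assumptions for this sketch).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("could not decode CA PEM input")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		panic(err)
	}

	// Server key plus a certificate template carrying the SAN list seen in the log.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-616827"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.128")},
		DNSNames:     []string{"default-k8s-diff-port-616827", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}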
	I0816 00:33:43.762853   78747 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:43.763069   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:43.763155   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.765886   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766256   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.766287   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766447   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.766641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766813   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.767164   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.767318   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.767334   78747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:44.025337   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:44.025373   78747 machine.go:96] duration metric: took 835.190539ms to provisionDockerMachine
	I0816 00:33:44.025387   78747 start.go:293] postStartSetup for "default-k8s-diff-port-616827" (driver="kvm2")
	I0816 00:33:44.025401   78747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:44.025416   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.025780   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:44.025804   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.028307   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028591   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.028618   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028740   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.028925   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.029117   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.029281   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.109481   78747 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:44.115290   78747 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:44.115317   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:44.115388   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:44.115482   78747 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:44.115597   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:44.128677   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:44.154643   78747 start.go:296] duration metric: took 129.242138ms for postStartSetup
	I0816 00:33:44.154685   78747 fix.go:56] duration metric: took 19.603921801s for fixHost
	I0816 00:33:44.154705   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.157477   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.157907   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.157937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.158051   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.158264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158411   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158580   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.158757   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:44.158981   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:44.158996   78747 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:44.254419   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768424.226223949
	
	I0816 00:33:44.254443   78747 fix.go:216] guest clock: 1723768424.226223949
	I0816 00:33:44.254452   78747 fix.go:229] Guest: 2024-08-16 00:33:44.226223949 +0000 UTC Remote: 2024-08-16 00:33:44.154688835 +0000 UTC m=+304.265683075 (delta=71.535114ms)
	I0816 00:33:44.254476   78747 fix.go:200] guest clock delta is within tolerance: 71.535114ms
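The guest-clock check at fix.go:216/229 above only compares the guest and host timestamps against a tolerance. A minimal sketch follows; the one-second tolerance is an assumption, the log only shows that a 71.535114ms delta passes.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock. Illustrative only; not minikube's code.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1723768424226223949)                     // guest clock from the log: 1723768424.226223949
	host := time.Date(2024, 8, 16, 0, 33, 44, 154688835, time.UTC) // "Remote" timestamp from the log
	fmt.Println(withinTolerance(guest, host, time.Second))         // true: delta is about 71.5ms
}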
	I0816 00:33:44.254482   78747 start.go:83] releasing machines lock for "default-k8s-diff-port-616827", held for 19.703745588s
	I0816 00:33:44.254504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.254750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:44.257516   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.257879   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.257910   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.258111   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258828   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258908   78747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:44.258946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.259033   78747 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:44.259048   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.261566   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261814   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261978   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262008   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262112   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262145   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262254   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262390   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262442   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262502   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.262549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262642   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.346934   78747 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:44.370413   78747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:44.519130   78747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:44.525276   78747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:44.525344   78747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:44.549125   78747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:44.549154   78747 start.go:495] detecting cgroup driver to use...
	I0816 00:33:44.549227   78747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:44.575221   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:44.592214   78747 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:44.592270   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:44.607403   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:44.629127   78747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:44.786185   78747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:44.954426   78747 docker.go:233] disabling docker service ...
	I0816 00:33:44.954495   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:44.975169   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:44.994113   78747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:45.142572   78747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:45.297255   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:45.313401   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:45.334780   78747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:45.334851   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.346039   78747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:45.346111   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.357681   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.368607   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.381164   78747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:45.394060   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.406010   78747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.424720   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
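The sed pipeline above reduces to a handful of line rewrites in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup driver to cgroupfs, and re-add conmon_cgroup under it (the default_sysctls edits are left out here). An illustrative in-process equivalent in Go, not the commands minikube actually runs; the sample config content is made up.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
` // stand-in for /etc/crio/crio.conf.d/02-crio.conf

	// Pin the pause image and switch the cgroup driver, mirroring the sed calls above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-insert it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}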
	I0816 00:33:45.437372   78747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:45.450515   78747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:45.450595   78747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:45.465740   78747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:45.476568   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:45.629000   78747 ssh_runner.go:195] Run: sudo systemctl restart crio
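The netfilter handling above is probe-then-fallback: the bridge-nf sysctl is missing on this guest (exit status 255), so br_netfilter is loaded, IPv4 forwarding is enabled, and CRI-O is restarted. A rough sketch of the same sequence, illustrative only and assuming passwordless sudo on the guest:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Probe for the bridge-netfilter sysctl; on this guest it is absent until the module loads.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("bridge-nf sysctl unavailable (%v), loading br_netfilter", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Make sure IPv4 forwarding is on, then restart CRI-O to pick up the new config.
	for _, args := range [][]string{
		{"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			log.Fatalf("%v: %v", args, err)
		}
	}
}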
	I0816 00:33:45.781044   78747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:45.781142   78747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:45.787480   78747 start.go:563] Will wait 60s for crictl version
	I0816 00:33:45.787551   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:33:45.791907   78747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:45.836939   78747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:45.837025   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.869365   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.907162   78747 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:44.277288   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .Start
	I0816 00:33:44.277426   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring networks are active...
	I0816 00:33:44.278141   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network default is active
	I0816 00:33:44.278471   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network mk-old-k8s-version-098619 is active
	I0816 00:33:44.278820   79191 main.go:141] libmachine: (old-k8s-version-098619) Getting domain xml...
	I0816 00:33:44.279523   79191 main.go:141] libmachine: (old-k8s-version-098619) Creating domain...
	I0816 00:33:45.643704   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting to get IP...
	I0816 00:33:45.644691   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.645213   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.645247   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.645162   80212 retry.go:31] will retry after 198.057532ms: waiting for machine to come up
	I0816 00:33:45.844756   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.845297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.845321   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.845247   80212 retry.go:31] will retry after 288.630433ms: waiting for machine to come up
	I0816 00:33:46.135913   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.136413   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.136442   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.136365   80212 retry.go:31] will retry after 456.48021ms: waiting for machine to come up
	I0816 00:33:46.594170   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.594649   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.594678   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.594592   80212 retry.go:31] will retry after 501.49137ms: waiting for machine to come up
	I0816 00:33:46.006040   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:47.007144   78713 node_ready.go:49] node "embed-certs-758469" has status "Ready":"True"
	I0816 00:33:47.007172   78713 node_ready.go:38] duration metric: took 5.504897396s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:47.007183   78713 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:47.014800   78713 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:49.022567   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:45.908518   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:45.912248   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.912762   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:45.912797   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.913115   78747 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:45.917917   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:45.935113   78747 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:45.935294   78747 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:45.935351   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:45.988031   78747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:45.988115   78747 ssh_runner.go:195] Run: which lz4
	I0816 00:33:45.992508   78747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:45.997108   78747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:45.997199   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:47.459404   78747 crio.go:462] duration metric: took 1.466928999s to copy over tarball
	I0816 00:33:47.459478   78747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:49.621449   78747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16194292s)
	I0816 00:33:49.621484   78747 crio.go:469] duration metric: took 2.162054092s to extract the tarball
	I0816 00:33:49.621494   78747 ssh_runner.go:146] rm: /preloaded.tar.lz4
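The preload handling above is: stat the tarball on the guest, copy it over if missing, unpack it into /var with lz4, then delete it. A hedged guest-side sketch of the check-and-extract part (the copy step is done over SSH from the host and is omitted here):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("tarball not present, copy it over first: %v", err) // the log scps it over SSH at this point
	}
	// Same extraction command as in the log: keep xattrs and decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract preload: %v", err)
	}
	_ = os.Remove(tarball) // the log removes it after extraction; may need elevated permissions
}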
	I0816 00:33:49.660378   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:49.709446   78747 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:49.709471   78747 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:33:49.709481   78747 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.0 crio true true} ...
	I0816 00:33:49.709609   78747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-616827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:49.709704   78747 ssh_runner.go:195] Run: crio config
	I0816 00:33:49.756470   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:49.756497   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:49.756510   78747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:49.756534   78747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-616827 NodeName:default-k8s-diff-port-616827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:49.756745   78747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-616827"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:49.756827   78747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:49.766769   78747 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:49.766840   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:49.776367   78747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 00:33:49.793191   78747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:49.811993   78747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 00:33:49.829787   78747 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:49.833673   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:49.846246   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:47.098130   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.098614   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.098645   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.098569   80212 retry.go:31] will retry after 663.568587ms: waiting for machine to come up
	I0816 00:33:47.763930   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.764447   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.764470   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.764376   80212 retry.go:31] will retry after 679.581678ms: waiting for machine to come up
	I0816 00:33:48.446082   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:48.446552   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:48.446579   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:48.446498   80212 retry.go:31] will retry after 1.090430732s: waiting for machine to come up
	I0816 00:33:49.538961   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:49.539454   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:49.539482   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:49.539397   80212 retry.go:31] will retry after 1.039148258s: waiting for machine to come up
	I0816 00:33:50.579642   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:50.580119   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:50.580144   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:50.580074   80212 retry.go:31] will retry after 1.440992413s: waiting for machine to come up
	I0816 00:33:51.788858   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:54.022577   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:49.963020   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:49.980142   78747 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827 for IP: 192.168.50.128
	I0816 00:33:49.980170   78747 certs.go:194] generating shared ca certs ...
	I0816 00:33:49.980192   78747 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:49.980408   78747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:49.980470   78747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:49.980489   78747 certs.go:256] generating profile certs ...
	I0816 00:33:49.980583   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/client.key
	I0816 00:33:49.980669   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key.2062a467
	I0816 00:33:49.980737   78747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key
	I0816 00:33:49.980891   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:49.980940   78747 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:49.980949   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:49.980984   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:49.981021   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:49.981050   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:49.981102   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:49.981835   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:50.014530   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:50.057377   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:50.085730   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:50.121721   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 00:33:50.166448   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:50.195059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:50.220059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:50.244288   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:50.268463   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:50.293203   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:50.318859   78747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:50.336625   78747 ssh_runner.go:195] Run: openssl version
	I0816 00:33:50.343301   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:50.355408   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360245   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360312   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.366435   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:50.377753   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:50.389482   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394337   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394419   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.400279   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:33:50.412410   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:50.424279   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429013   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429077   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.435095   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:50.448148   78747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:50.453251   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:50.459730   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:50.466145   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:50.472438   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:50.478701   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:50.485081   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
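	The six `openssl x509 ... -checkend 86400` runs above simply verify that each control-plane certificate is still valid for at least 24 hours before the cluster restart proceeds. A minimal Go sketch of an equivalent check follows; it is an illustration only (the single cert path is just one of the files listed above), not the test's own code.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// One of the certs checked in the log above; any PEM-encoded cert path works.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}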
	I0816 00:33:50.490958   78747 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:50.491091   78747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:50.491173   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.545458   78747 cri.go:89] found id: ""
	I0816 00:33:50.545532   78747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:50.557054   78747 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:50.557074   78747 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:50.557122   78747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:50.570313   78747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:50.571774   78747 kubeconfig.go:125] found "default-k8s-diff-port-616827" server: "https://192.168.50.128:8444"
	I0816 00:33:50.574969   78747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:50.586066   78747 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I0816 00:33:50.586101   78747 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:50.586114   78747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:50.586172   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.631347   78747 cri.go:89] found id: ""
	I0816 00:33:50.631416   78747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:50.651296   78747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:50.665358   78747 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:50.665387   78747 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:50.665427   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 00:33:50.678634   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:50.678706   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:50.690376   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 00:33:50.702070   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:50.702132   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:50.714117   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.725349   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:50.725413   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.735691   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 00:33:50.745524   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:50.745598   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:50.756310   78747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:50.771825   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:50.908593   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.046812   78747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138178717s)
	I0816 00:33:52.046863   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.282111   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.357877   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.485435   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:52.485531   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:52.985717   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.486461   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.522663   78747 api_server.go:72] duration metric: took 1.037234176s to wait for apiserver process to appear ...
	I0816 00:33:53.522692   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:53.522713   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:52.022573   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:52.023319   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:52.023352   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:52.023226   80212 retry.go:31] will retry after 1.814668747s: waiting for machine to come up
	I0816 00:33:53.839539   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:53.839916   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:53.839944   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:53.839861   80212 retry.go:31] will retry after 1.900379439s: waiting for machine to come up
	I0816 00:33:55.742480   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:55.742981   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:55.743004   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:55.742920   80212 retry.go:31] will retry after 2.798728298s: waiting for machine to come up
	I0816 00:33:56.782681   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.782714   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:56.782730   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:56.828595   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.828628   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:57.022870   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.028291   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.028326   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:57.522858   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.533079   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.533120   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.023304   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.029913   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:58.029948   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.523517   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.529934   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:33:58.536872   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:58.536898   78747 api_server.go:131] duration metric: took 5.014199256s to wait for apiserver health ...
	I0816 00:33:58.536907   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:58.536916   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:58.539004   78747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
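	The healthz loop above polls https://192.168.50.128:8444/healthz, tolerating the 403 (anonymous user) and 500 (post-start hooks still failing) responses until the apiserver finally answers 200 "ok". A rough standalone Go poller of the same endpoint is sketched below; it skips TLS verification because no client certificate is loaded here, and is only an illustration of the pattern, not minikube's api_server.go implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			// Endpoint taken from the log above; 403/500 are treated as "not ready yet".
			resp, err := client.Get("https://192.168.50.128:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz -> %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}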
	I0816 00:33:54.522157   78713 pod_ready.go:93] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.522186   78713 pod_ready.go:82] duration metric: took 7.507358513s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.522201   78713 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529305   78713 pod_ready.go:93] pod "etcd-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.529323   78713 pod_ready.go:82] duration metric: took 7.114484ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529331   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536656   78713 pod_ready.go:93] pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.536688   78713 pod_ready.go:82] duration metric: took 7.349231ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536701   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542615   78713 pod_ready.go:93] pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.542637   78713 pod_ready.go:82] duration metric: took 5.927403ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542650   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548165   78713 pod_ready.go:93] pod "kube-proxy-4xc89" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.548188   78713 pod_ready.go:82] duration metric: took 5.530073ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548200   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919561   78713 pod_ready.go:93] pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.919586   78713 pod_ready.go:82] duration metric: took 371.377774ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919598   78713 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:56.925892   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.926811   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
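	The pod_ready waits in this block repeatedly read each kube-system pod and succeed once its PodReady condition reports True. A small client-go sketch of that style of check is shown below; the kubeconfig path and pod name are lifted from the log for concreteness, but the snippet is an editorial illustration rather than the test's own helper.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19452-12919/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the pod's PodReady condition is True, like pod_ready.go does above.
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-758469", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
	}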
	I0816 00:33:58.540592   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:58.554493   78747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:33:58.594341   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:58.605247   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:58.605293   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:58.605304   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:58.605314   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:58.605329   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:58.605342   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:33:58.605351   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:58.605358   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:58.605363   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:33:58.605372   78747 system_pods.go:74] duration metric: took 11.009517ms to wait for pod list to return data ...
	I0816 00:33:58.605384   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:58.609964   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:58.609996   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:58.610007   78747 node_conditions.go:105] duration metric: took 4.615471ms to run NodePressure ...
	I0816 00:33:58.610025   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:58.930292   78747 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937469   78747 kubeadm.go:739] kubelet initialised
	I0816 00:33:58.937499   78747 kubeadm.go:740] duration metric: took 7.181814ms waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937509   78747 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:59.036968   78747 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.046554   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046589   78747 pod_ready.go:82] duration metric: took 9.589918ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.046601   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046618   78747 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.053621   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053654   78747 pod_ready.go:82] duration metric: took 7.022323ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.053669   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053678   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.065329   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065357   78747 pod_ready.go:82] duration metric: took 11.650757ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.065378   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065387   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.074595   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074627   78747 pod_ready.go:82] duration metric: took 9.230183ms for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.074643   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074657   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.399077   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399105   78747 pod_ready.go:82] duration metric: took 324.440722ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.399116   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399124   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.797130   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797158   78747 pod_ready.go:82] duration metric: took 398.024149ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.797169   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797176   78747 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:00.197929   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197961   78747 pod_ready.go:82] duration metric: took 400.777243ms for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:34:00.197976   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197992   78747 pod_ready.go:39] duration metric: took 1.260464876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:00.198024   78747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:34:00.210255   78747 ops.go:34] apiserver oom_adj: -16
	I0816 00:34:00.210278   78747 kubeadm.go:597] duration metric: took 9.653197586s to restartPrimaryControlPlane
	I0816 00:34:00.210302   78747 kubeadm.go:394] duration metric: took 9.719364617s to StartCluster
	I0816 00:34:00.210322   78747 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.210405   78747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:00.212730   78747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.213053   78747 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:34:00.213162   78747 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:34:00.213247   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:00.213277   78747 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213292   78747 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213305   78747 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213313   78747 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:34:00.213344   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213352   78747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-616827"
	I0816 00:34:00.213298   78747 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213413   78747 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213435   78747 addons.go:243] addon metrics-server should already be in state true
	I0816 00:34:00.213463   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213751   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213795   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213752   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213886   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213756   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213992   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.215058   78747 out.go:177] * Verifying Kubernetes components...
	I0816 00:34:00.216719   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:00.229428   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0816 00:34:00.229676   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0816 00:34:00.229881   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230164   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230522   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230538   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230689   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230727   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230850   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.231488   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.231512   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.231754   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.232394   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.232426   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.232909   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0816 00:34:00.233400   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.233959   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.233979   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.234368   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.234576   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.238180   78747 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.238203   78747 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:34:00.238230   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.238598   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.238642   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.249682   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0816 00:34:00.250163   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.250894   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.250919   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.251326   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0816 00:34:00.251324   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.251663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.251828   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.252294   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.252318   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.252863   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.253070   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.253746   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.254958   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.255056   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0816 00:34:00.255513   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.256043   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.256083   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.256121   78747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:00.256494   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.257255   78747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:34:00.257377   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.257422   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.259132   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:34:00.259154   78747 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:34:00.259176   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.259204   78747 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.259223   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:34:00.259241   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.263096   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263213   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263688   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263874   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263996   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264175   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264441   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.264511   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264695   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.274557   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0816 00:34:00.274984   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.275444   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.275463   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.275735   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.275946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.277509   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.277745   78747 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.277762   78747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:34:00.277782   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.280264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280660   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.280689   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280790   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.280982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.281140   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.281286   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.445986   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:00.465112   78747 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:00.568927   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.602693   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.620335   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:34:00.620355   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:34:00.667790   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:34:00.667810   78747 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:34:00.698510   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.698536   78747 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:34:00.723319   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.975635   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.975663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976006   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976007   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976030   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.976044   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.976075   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976347   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976340   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976376   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.983280   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.983304   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.983587   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.983586   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.983620   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.678707   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678733   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.678889   78747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.076166351s)
	I0816 00:34:01.678936   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678955   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679115   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679136   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679145   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679153   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679473   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679497   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679484   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679514   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679521   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679525   78747 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-616827"
	I0816 00:34:01.679528   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679537   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679544   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679821   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679862   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679887   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.683006   78747 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
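	For context, the addon step above boils down to copying each manifest onto the VM and applying it with the kubectl binary minikube stages inside the VM. A minimal Go sketch of that flow (illustrative only: the helper, the /tmp staging path, and the key path are placeholders, not minikube's ssh_runner, which writes straight into /etc/kubernetes/addons):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddon copies one addon manifest to the VM and applies it with the
	// in-VM kubectl, mirroring the "installing ..." / "kubectl apply -f ..."
	// pairs in the log above.
	func applyAddon(sshTarget, keyPath, manifest string) error {
		dst := fmt.Sprintf("%s:/tmp/%s", sshTarget, manifest)
		if out, err := exec.Command("scp", "-i", keyPath, manifest, dst).CombinedOutput(); err != nil {
			return fmt.Errorf("scp: %v: %s", err, out)
		}
		apply := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.31.0/kubectl apply -f /tmp/" + manifest
		if out, err := exec.Command("ssh", "-i", keyPath, sshTarget, apply).CombinedOutput(); err != nil {
			return fmt.Errorf("kubectl apply: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(applyAddon("docker@192.168.50.128", "/path/to/id_rsa", "storageclass.yaml"))
	}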
	I0816 00:33:58.543282   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:58.543753   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:58.543783   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:58.543689   80212 retry.go:31] will retry after 4.402812235s: waiting for machine to come up
	I0816 00:34:00.927244   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:03.428032   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:04.178649   78489 start.go:364] duration metric: took 54.753990439s to acquireMachinesLock for "no-preload-819398"
	I0816 00:34:04.178706   78489 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:34:04.178714   78489 fix.go:54] fixHost starting: 
	I0816 00:34:04.179124   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:04.179162   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:04.195783   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0816 00:34:04.196138   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:04.196590   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:34:04.196614   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:04.196962   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:04.197161   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:04.197303   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:34:04.198795   78489 fix.go:112] recreateIfNeeded on no-preload-819398: state=Stopped err=<nil>
	I0816 00:34:04.198814   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	W0816 00:34:04.198978   78489 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:34:04.200736   78489 out.go:177] * Restarting existing kvm2 VM for "no-preload-819398" ...
	I0816 00:34:01.684641   78747 addons.go:510] duration metric: took 1.471480873s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 00:34:02.473603   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:04.476035   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:02.951078   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951631   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has current primary IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951672   79191 main.go:141] libmachine: (old-k8s-version-098619) Found IP for machine: 192.168.72.137
	I0816 00:34:02.951687   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserving static IP address...
	I0816 00:34:02.952154   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserved static IP address: 192.168.72.137
	I0816 00:34:02.952186   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.952201   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting for SSH to be available...
	I0816 00:34:02.952224   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | skip adding static IP to network mk-old-k8s-version-098619 - found existing host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"}
	I0816 00:34:02.952236   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Getting to WaitForSSH function...
	I0816 00:34:02.954361   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954686   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.954715   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954791   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH client type: external
	I0816 00:34:02.954830   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa (-rw-------)
	I0816 00:34:02.954871   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:02.954890   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | About to run SSH command:
	I0816 00:34:02.954909   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | exit 0
	I0816 00:34:03.078035   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:03.078408   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:34:03.079002   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.081041   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081391   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.081489   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081566   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:34:03.081748   79191 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:03.081767   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.082007   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.084022   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084333   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.084357   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084499   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.084700   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.084867   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.085074   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.085266   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.085509   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.085525   79191 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:03.186066   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:03.186094   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186368   79191 buildroot.go:166] provisioning hostname "old-k8s-version-098619"
	I0816 00:34:03.186397   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186597   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.189330   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189658   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.189702   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189792   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.190004   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190185   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190344   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.190481   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.190665   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.190688   79191 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098619 && echo "old-k8s-version-098619" | sudo tee /etc/hostname
	I0816 00:34:03.304585   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098619
	
	I0816 00:34:03.304608   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.307415   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307732   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.307763   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307955   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.308155   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308314   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308474   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.308629   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.308795   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.308811   79191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098619/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:03.418968   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:03.419010   79191 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:03.419045   79191 buildroot.go:174] setting up certificates
	I0816 00:34:03.419058   79191 provision.go:84] configureAuth start
	I0816 00:34:03.419072   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.419338   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.421799   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422159   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.422198   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422401   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.425023   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425417   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.425445   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425557   79191 provision.go:143] copyHostCerts
	I0816 00:34:03.425624   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:03.425646   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:03.425717   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:03.425875   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:03.425888   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:03.425921   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:03.426007   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:03.426017   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:03.426045   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:03.426112   79191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098619 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-098619]
	I0816 00:34:03.509869   79191 provision.go:177] copyRemoteCerts
	I0816 00:34:03.509932   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:03.509961   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.512603   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.512938   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.512984   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.513163   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.513451   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.513617   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.513777   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:03.596330   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 00:34:03.621969   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:03.646778   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:03.671937   79191 provision.go:87] duration metric: took 252.867793ms to configureAuth
	I0816 00:34:03.671964   79191 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:03.672149   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:34:03.672250   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.675207   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675600   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.675625   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675787   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.676006   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676199   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676360   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.676549   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.676762   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.676779   79191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:03.945259   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:03.945287   79191 machine.go:96] duration metric: took 863.526642ms to provisionDockerMachine
	I0816 00:34:03.945298   79191 start.go:293] postStartSetup for "old-k8s-version-098619" (driver="kvm2")
	I0816 00:34:03.945308   79191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:03.945335   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.945638   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:03.945666   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.948590   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.948967   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.948989   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.949152   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.949350   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.949491   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.949645   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.028994   79191 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:04.033776   79191 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:04.033799   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:04.033872   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:04.033943   79191 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:04.034033   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:04.045492   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:04.071879   79191 start.go:296] duration metric: took 126.569157ms for postStartSetup
	I0816 00:34:04.071920   79191 fix.go:56] duration metric: took 19.817260263s for fixHost
	I0816 00:34:04.071944   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.074942   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.075325   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075504   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.075699   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075846   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075977   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.076146   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:04.076319   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:04.076332   79191 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:04.178483   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768444.133390375
	
	I0816 00:34:04.178510   79191 fix.go:216] guest clock: 1723768444.133390375
	I0816 00:34:04.178519   79191 fix.go:229] Guest: 2024-08-16 00:34:04.133390375 +0000 UTC Remote: 2024-08-16 00:34:04.071925107 +0000 UTC m=+252.320651106 (delta=61.465268ms)
	I0816 00:34:04.178537   79191 fix.go:200] guest clock delta is within tolerance: 61.465268ms
	I0816 00:34:04.178541   79191 start.go:83] releasing machines lock for "old-k8s-version-098619", held for 19.923923778s
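	The guest-clock check above parses the VM's "date +%s.%N" output and compares it with the host clock; 1723768444.133390375 against 00:34:04.071925107 UTC gives the logged 61.465268ms delta. A small Go sketch of that arithmetic (the 2s tolerance is assumed for illustration; the real threshold may differ):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta converts the guest's "seconds.nanoseconds" string into a time
	// and returns its offset from the host clock.
	func clockDelta(guestDate string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestDate), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		nsec := int64(0)
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		host := time.Date(2024, 8, 16, 0, 34, 4, 71925107, time.UTC)
		delta, _ := clockDelta("1723768444.133390375", host)
		tolerance := 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v within tolerance=%v: %v\n",
			delta, tolerance, math.Abs(float64(delta)) <= float64(tolerance))
	}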
	I0816 00:34:04.178567   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.178875   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:04.181999   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182458   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.182490   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183192   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183357   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183412   79191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:04.183461   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.183553   79191 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:04.183575   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.186192   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186418   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186507   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186531   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186679   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.186811   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186836   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186850   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187016   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187032   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.187211   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187215   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.187364   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187488   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.283880   79191 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:04.289798   79191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:04.436822   79191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:04.443547   79191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:04.443631   79191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:04.464783   79191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:04.464807   79191 start.go:495] detecting cgroup driver to use...
	I0816 00:34:04.464873   79191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:04.481504   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:04.501871   79191 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:04.501942   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:04.521898   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:04.538186   79191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:04.704361   79191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:04.881682   79191 docker.go:233] disabling docker service ...
	I0816 00:34:04.881757   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:04.900264   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:04.916152   79191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:05.048440   79191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:05.166183   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:05.181888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:05.202525   79191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 00:34:05.202592   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.214655   79191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:05.214712   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.226052   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.236878   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
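	The sed edits above pin the pause image and switch CRI-O's cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A Go sketch of the same rewrite, purely to show what those lines end up as (not minikube's actual code path):

	package main

	import (
		"fmt"
		"regexp"
	)

	// patchCrioConf applies the equivalent of the two logged sed commands to a
	// CRI-O drop-in config: replace the pause_image and cgroup_manager lines.
	func patchCrioConf(conf, pauseImage, cgroupManager string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.2", "cgroupfs"))
	}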
	I0816 00:34:05.249217   79191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:05.260362   79191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:05.271039   79191 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:05.271108   79191 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:05.290423   79191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:05.307175   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:05.465815   79191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:05.640787   79191 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:05.640878   79191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:05.646821   79191 start.go:563] Will wait 60s for crictl version
	I0816 00:34:05.646883   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:05.651455   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:05.698946   79191 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:05.699037   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.729185   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.772063   79191 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 00:34:05.773406   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:05.776689   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777177   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:05.777241   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777435   79191 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:05.782377   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:05.797691   79191 kubeadm.go:883] updating cluster {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:05.797872   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:34:05.797953   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:05.861468   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:05.861557   79191 ssh_runner.go:195] Run: which lz4
	I0816 00:34:05.866880   79191 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:34:05.872036   79191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:34:05.872071   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 00:34:04.202120   78489 main.go:141] libmachine: (no-preload-819398) Calling .Start
	I0816 00:34:04.202293   78489 main.go:141] libmachine: (no-preload-819398) Ensuring networks are active...
	I0816 00:34:04.203062   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network default is active
	I0816 00:34:04.203345   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network mk-no-preload-819398 is active
	I0816 00:34:04.205286   78489 main.go:141] libmachine: (no-preload-819398) Getting domain xml...
	I0816 00:34:04.206025   78489 main.go:141] libmachine: (no-preload-819398) Creating domain...
	I0816 00:34:05.553661   78489 main.go:141] libmachine: (no-preload-819398) Waiting to get IP...
	I0816 00:34:05.554629   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.555210   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.555309   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.555211   80407 retry.go:31] will retry after 298.759084ms: waiting for machine to come up
	I0816 00:34:05.856046   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.856571   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.856604   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.856530   80407 retry.go:31] will retry after 293.278331ms: waiting for machine to come up
	I0816 00:34:06.151110   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.151542   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.151571   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.151498   80407 retry.go:31] will retry after 332.472371ms: waiting for machine to come up
	I0816 00:34:06.485927   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.486487   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.486514   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.486459   80407 retry.go:31] will retry after 600.720276ms: waiting for machine to come up
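	The "will retry after ..." lines come from a poll-with-backoff loop that waits for the restarted domain to pick up a DHCP lease. A stripped-down sketch of that pattern (the delays, jitter, and lookupIP stub are illustrative, not the actual retry.go parameters):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a placeholder for the libvirt DHCP-lease query in the driver.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls lookupIP with a growing, slightly jittered delay until a
	// deadline, printing lines in the spirit of the logged retry messages.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 4))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
			time.Sleep(delay + jitter)
			if delay < 5*time.Second {
				delay += delay / 2
			}
		}
		return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
	}

	func main() {
		if _, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}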
	I0816 00:34:05.926954   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.929140   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:06.972334   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:07.469652   78747 node_ready.go:49] node "default-k8s-diff-port-616827" has status "Ready":"True"
	I0816 00:34:07.469684   78747 node_ready.go:38] duration metric: took 7.004536271s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:07.469700   78747 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:07.476054   78747 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482839   78747 pod_ready.go:93] pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.482861   78747 pod_ready.go:82] duration metric: took 6.779315ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482871   78747 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489325   78747 pod_ready.go:93] pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.489348   78747 pod_ready.go:82] duration metric: took 6.470629ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489357   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495536   78747 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.495555   78747 pod_ready.go:82] duration metric: took 6.192295ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495565   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:09.503258   78747 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"False"
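	The node_ready and pod_ready waits above amount to polling the Ready condition until it reports True or the deadline passes. A rough sketch, shelling out to kubectl for brevity (minikube's own helpers query the API directly; the context and pod names below are taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reports whether the pod's Ready condition is currently True.
	func podReady(context, namespace, pod string) bool {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	// waitForPod polls podReady every two seconds until success or timeout.
	func waitForPod(context, namespace, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if podReady(context, namespace, pod) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", namespace, pod, timeout)
	}

	func main() {
		fmt.Println(waitForPod("default-k8s-diff-port-616827", "kube-system",
			"kube-controller-manager-default-k8s-diff-port-616827", 6*time.Minute))
	}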
	I0816 00:34:07.631328   79191 crio.go:462] duration metric: took 1.76448771s to copy over tarball
	I0816 00:34:07.631413   79191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:34:10.662435   79191 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.030990355s)
	I0816 00:34:10.662472   79191 crio.go:469] duration metric: took 3.031115615s to extract the tarball
	I0816 00:34:10.662482   79191 ssh_runner.go:146] rm: /preloaded.tar.lz4
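	The preload handling above is: check whether /preloaded.tar.lz4 already exists on the VM, copy the cached tarball over if not, unpack it into /var with xattrs preserved, then remove the tarball. A compact Go sketch of that sequence (runSSH, the key path, and the scp staging are placeholders for illustration, not minikube's ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runSSH is a stand-in for minikube's ssh_runner: run one command on the VM.
	func runSSH(target, key, cmd string) error {
		out, err := exec.Command("ssh", "-i", key, target, cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v: %s", cmd, err, out)
		}
		return nil
	}

	func ensurePreload(target, key, localTarball string) error {
		// Skip the transfer if the tarball is already present (the logged stat check).
		if err := runSSH(target, key, `stat -c "%s %y" /preloaded.tar.lz4`); err == nil {
			return nil
		}
		// Copy the cached preload tarball into the VM.
		if out, err := exec.Command("scp", "-i", key, localTarball,
			target+":/preloaded.tar.lz4").CombinedOutput(); err != nil {
			return fmt.Errorf("scp: %v: %s", err, out)
		}
		// Unpack into /var with the same tar flags the logged command uses.
		return runSSH(target, key,
			"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
	}

	func main() {
		fmt.Println(ensurePreload("docker@192.168.72.137", "/path/to/id_rsa",
			"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"))
	}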
	I0816 00:34:10.707627   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:10.745704   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:10.745742   79191 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.745838   79191 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.745914   79191 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.745860   79191 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.745943   79191 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.745884   79191 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.746059   79191 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747781   79191 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.747803   79191 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.747808   79191 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.747824   79191 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.747842   79191 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.747883   79191 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.747895   79191 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747948   79191 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.916488   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.923947   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.931668   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.942764   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.948555   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.957593   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.970039   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 00:34:11.012673   79191 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 00:34:11.012707   79191 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.012778   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.026267   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:11.135366   79191 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 00:34:11.135398   79191 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.135451   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.149180   79191 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 00:34:11.149226   79191 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.149271   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183480   79191 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 00:34:11.183526   79191 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.183526   79191 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 00:34:11.183578   79191 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.183584   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183637   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186513   79191 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 00:34:11.186559   79191 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.186622   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186632   79191 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 00:34:11.186658   79191 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 00:34:11.186699   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186722   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.252857   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.252914   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.252935   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.253007   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.253012   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.253083   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.253140   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420527   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.420559   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.420564   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.420638   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420732   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.420791   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.420813   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591141   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.591197   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.591267   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.591337   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.591418   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591453   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.591505   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 00:34:11.721234   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 00:34:11.725967   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 00:34:11.731189   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 00:34:11.731276   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 00:34:11.742195   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 00:34:11.742224   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 00:34:11.742265   79191 cache_images.go:92] duration metric: took 996.507737ms to LoadCachedImages
	W0816 00:34:11.742327   79191 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0816 00:34:11.742342   79191 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0816 00:34:11.742464   79191 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-098619 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:11.742546   79191 ssh_runner.go:195] Run: crio config
	I0816 00:34:07.089462   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.090073   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.090099   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.089985   80407 retry.go:31] will retry after 666.260439ms: waiting for machine to come up
	I0816 00:34:07.757621   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.758156   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.758182   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.758105   80407 retry.go:31] will retry after 782.571604ms: waiting for machine to come up
	I0816 00:34:08.542021   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:08.542426   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:08.542475   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:08.542381   80407 retry.go:31] will retry after 840.347921ms: waiting for machine to come up
	I0816 00:34:09.384399   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:09.384866   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:09.384893   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:09.384824   80407 retry.go:31] will retry after 1.376690861s: waiting for machine to come up
	I0816 00:34:10.763158   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:10.763547   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:10.763573   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:10.763484   80407 retry.go:31] will retry after 1.237664711s: waiting for machine to come up
	I0816 00:34:10.426656   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:12.429312   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.354758   78747 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.354783   78747 pod_ready.go:82] duration metric: took 3.859210458s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.354796   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363323   78747 pod_ready.go:93] pod "kube-proxy-f99ds" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.363347   78747 pod_ready.go:82] duration metric: took 8.543406ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363359   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369799   78747 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.369826   78747 pod_ready.go:82] duration metric: took 6.458192ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369858   78747 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:13.376479   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.791749   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:34:11.791779   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:11.791791   79191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:11.791810   79191 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098619 NodeName:old-k8s-version-098619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 00:34:11.791969   79191 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-098619"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:11.792046   79191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 00:34:11.802572   79191 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:11.802649   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:11.812583   79191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 00:34:11.831551   79191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:11.852476   79191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 00:34:11.875116   79191 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:11.879833   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:11.893308   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:12.038989   79191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:12.061736   79191 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619 for IP: 192.168.72.137
	I0816 00:34:12.061761   79191 certs.go:194] generating shared ca certs ...
	I0816 00:34:12.061780   79191 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.061992   79191 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:12.062046   79191 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:12.062059   79191 certs.go:256] generating profile certs ...
	I0816 00:34:12.062193   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key
	I0816 00:34:12.062283   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4
	I0816 00:34:12.062343   79191 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key
	I0816 00:34:12.062485   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:12.062523   79191 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:12.062536   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:12.062579   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:12.062614   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:12.062658   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:12.062721   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:12.063630   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:12.106539   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:12.139393   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:12.171548   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:12.213113   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 00:34:12.244334   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:34:12.287340   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:12.331047   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:34:12.369666   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:12.397260   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:12.424009   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:12.450212   79191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:12.471550   79191 ssh_runner.go:195] Run: openssl version
	I0816 00:34:12.479821   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:12.494855   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500546   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500620   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.508817   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:12.521689   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:12.533904   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538789   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538946   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.546762   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:12.561940   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:12.575852   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582377   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582457   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.590772   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:12.604976   79191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:12.610332   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:12.617070   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:12.625769   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:12.634342   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:12.641486   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:12.650090   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 00:34:12.658206   79191 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:12.658306   79191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:12.658392   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.703323   79191 cri.go:89] found id: ""
	I0816 00:34:12.703399   79191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:12.714950   79191 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:12.714970   79191 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:12.715047   79191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:12.727051   79191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:12.728059   79191 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:12.728655   79191 kubeconfig.go:62] /home/jenkins/minikube-integration/19452-12919/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-098619" cluster setting kubeconfig missing "old-k8s-version-098619" context setting]
	I0816 00:34:12.729552   79191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.731269   79191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:12.744732   79191 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0816 00:34:12.744766   79191 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:12.744777   79191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:12.744833   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.783356   79191 cri.go:89] found id: ""
	I0816 00:34:12.783432   79191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:12.801942   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:12.816412   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:12.816433   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:12.816480   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:12.827686   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:12.827757   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:12.838063   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:12.847714   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:12.847808   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:12.858274   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.869328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:12.869389   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.881457   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:12.892256   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:12.892325   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:12.902115   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:12.912484   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.040145   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.851639   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.085396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.208430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.321003   79191 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:14.321084   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:14.822130   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.321780   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.822121   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:16.322077   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:12.002977   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:12.003441   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:12.003470   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:12.003401   80407 retry.go:31] will retry after 1.413320186s: waiting for machine to come up
	I0816 00:34:13.418972   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:13.419346   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:13.419374   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:13.419284   80407 retry.go:31] will retry after 2.055525842s: waiting for machine to come up
	I0816 00:34:15.476550   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:15.477044   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:15.477072   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:15.477021   80407 retry.go:31] will retry after 2.728500649s: waiting for machine to come up
	I0816 00:34:14.926133   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.930322   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:15.377291   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:17.877627   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.821714   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.321166   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.821648   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.321711   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.821520   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.321732   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.821325   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.321783   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.821958   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:21.321139   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.208958   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:18.209350   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:18.209379   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:18.209302   80407 retry.go:31] will retry after 3.922749943s: waiting for machine to come up
	I0816 00:34:19.426265   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.926480   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.134804   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135230   78489 main.go:141] libmachine: (no-preload-819398) Found IP for machine: 192.168.61.15
	I0816 00:34:22.135266   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has current primary IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135292   78489 main.go:141] libmachine: (no-preload-819398) Reserving static IP address...
	I0816 00:34:22.135596   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.135629   78489 main.go:141] libmachine: (no-preload-819398) DBG | skip adding static IP to network mk-no-preload-819398 - found existing host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"}
	I0816 00:34:22.135644   78489 main.go:141] libmachine: (no-preload-819398) Reserved static IP address: 192.168.61.15
	I0816 00:34:22.135661   78489 main.go:141] libmachine: (no-preload-819398) Waiting for SSH to be available...
	I0816 00:34:22.135675   78489 main.go:141] libmachine: (no-preload-819398) DBG | Getting to WaitForSSH function...
	I0816 00:34:22.137639   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.137925   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.137956   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.138099   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH client type: external
	I0816 00:34:22.138141   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa (-rw-------)
	I0816 00:34:22.138198   78489 main.go:141] libmachine: (no-preload-819398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:22.138233   78489 main.go:141] libmachine: (no-preload-819398) DBG | About to run SSH command:
	I0816 00:34:22.138248   78489 main.go:141] libmachine: (no-preload-819398) DBG | exit 0
	I0816 00:34:22.262094   78489 main.go:141] libmachine: (no-preload-819398) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:22.262496   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetConfigRaw
	I0816 00:34:22.263081   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.265419   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.265746   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.265782   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.266097   78489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/config.json ...
	I0816 00:34:22.266283   78489 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:22.266301   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:22.266501   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.268848   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269269   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.269308   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269356   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.269537   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269684   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269803   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.269971   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.270185   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.270197   78489 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:22.374848   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:22.374880   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375169   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:34:22.375195   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375407   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.378309   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378649   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.378678   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378853   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.379060   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379203   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379362   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.379568   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.379735   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.379749   78489 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-819398 && echo "no-preload-819398" | sudo tee /etc/hostname
	I0816 00:34:22.496438   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-819398
	
	I0816 00:34:22.496467   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.499101   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499411   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.499443   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499703   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.499912   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500116   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500247   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.500419   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.500624   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.500650   78489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-819398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-819398/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-819398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:22.619769   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:22.619802   78489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:22.619826   78489 buildroot.go:174] setting up certificates
	I0816 00:34:22.619837   78489 provision.go:84] configureAuth start
	I0816 00:34:22.619847   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.620106   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.623130   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623485   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.623510   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623629   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.625964   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626308   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.626335   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626475   78489 provision.go:143] copyHostCerts
	I0816 00:34:22.626536   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:22.626557   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:22.626629   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:22.626756   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:22.626768   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:22.626798   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:22.626889   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:22.626899   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:22.626925   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:22.627008   78489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.no-preload-819398 san=[127.0.0.1 192.168.61.15 localhost minikube no-preload-819398]
	I0816 00:34:22.710036   78489 provision.go:177] copyRemoteCerts
	I0816 00:34:22.710093   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:22.710120   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.712944   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713380   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.713409   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713612   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.713780   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.713926   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.714082   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:22.800996   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:22.828264   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:34:22.855258   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:22.880981   78489 provision.go:87] duration metric: took 261.134406ms to configureAuth
	I0816 00:34:22.881013   78489 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:22.881176   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:22.881240   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.883962   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884348   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.884368   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884611   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.884828   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885052   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885248   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.885448   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.885639   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.885661   78489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:23.154764   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:23.154802   78489 machine.go:96] duration metric: took 888.504728ms to provisionDockerMachine
	I0816 00:34:23.154821   78489 start.go:293] postStartSetup for "no-preload-819398" (driver="kvm2")
	I0816 00:34:23.154837   78489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:23.154860   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.155176   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:23.155205   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.158105   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158482   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.158517   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158674   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.158864   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.159039   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.159198   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.241041   78489 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:23.245237   78489 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:23.245260   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:23.245324   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:23.245398   78489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:23.245480   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:23.254735   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:23.279620   78489 start.go:296] duration metric: took 124.783636ms for postStartSetup
	I0816 00:34:23.279668   78489 fix.go:56] duration metric: took 19.100951861s for fixHost
	I0816 00:34:23.279693   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.282497   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.282959   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.282981   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.283184   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.283376   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283514   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283687   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.283870   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:23.284027   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:23.284037   78489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:23.390632   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768463.360038650
	
	I0816 00:34:23.390658   78489 fix.go:216] guest clock: 1723768463.360038650
	I0816 00:34:23.390668   78489 fix.go:229] Guest: 2024-08-16 00:34:23.36003865 +0000 UTC Remote: 2024-08-16 00:34:23.27967333 +0000 UTC m=+356.445975156 (delta=80.36532ms)
	I0816 00:34:23.390697   78489 fix.go:200] guest clock delta is within tolerance: 80.36532ms
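
The fix.go lines above read the guest clock with `date +%s.%N` and accept the run because the ~80ms delta is within tolerance. A minimal sketch of that comparison is below; the one-second tolerance is an assumption, and minikube's real threshold may differ.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClock parses the output of `date +%s.%N`, e.g. "1723768463.360038650".
    func guestClock(out string) time.Time {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        return time.Unix(sec, nsec)
    }

    func main() {
        guest := guestClock("1723768463.360038650") // value from the log above
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed tolerance for this sketch
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
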
	I0816 00:34:23.390710   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 19.212026147s
	I0816 00:34:23.390729   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.390977   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:23.393728   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394050   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.394071   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394255   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394722   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394895   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394977   78489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:23.395028   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.395135   78489 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:23.395151   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.397773   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.397939   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398196   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398237   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398354   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398480   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398507   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398515   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398717   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.398722   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398887   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398884   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.399029   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.399164   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.497983   78489 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:23.503896   78489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:23.660357   78489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:23.666714   78489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:23.666775   78489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:23.684565   78489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:23.684586   78489 start.go:495] detecting cgroup driver to use...
	I0816 00:34:23.684655   78489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:23.701981   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:23.715786   78489 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:23.715852   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:23.733513   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:23.748705   78489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:23.866341   78489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:24.016845   78489 docker.go:233] disabling docker service ...
	I0816 00:34:24.016918   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:24.032673   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:24.046465   78489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:24.184862   78489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:24.309066   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:24.323818   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:24.344352   78489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:34:24.344422   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.355015   78489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:24.355093   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.365665   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.377238   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.388619   78489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:24.399306   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.410087   78489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.428465   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
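
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Below is a small hedged check, in Go, that the drop-in ends up containing the values those commands ask for; the expected strings are taken directly from the sed expressions, and the file's actual layout may differ.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
        if err != nil {
            panic(err)
        }
        conf := string(data)
        // Expected values, taken from the sed commands in the log above.
        for _, want := range []string{
            `pause_image = "registry.k8s.io/pause:3.10"`,
            `cgroup_manager = "cgroupfs"`,
            `conmon_cgroup = "pod"`,
            `"net.ipv4.ip_unprivileged_port_start=0"`,
        } {
            fmt.Printf("%-60s present: %v\n", want, strings.Contains(conf, want))
        }
    }
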
	I0816 00:34:24.439026   78489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:24.448856   78489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:24.448943   78489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:24.463002   78489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
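
When the bridge-netfilter sysctl probe fails because br_netfilter is not loaded yet (the status-255 error above), the sequence falls back to modprobe and then enables IPv4 forwarding. A rough local sketch of the same fallback using os/exec; the real commands run on the VM over SSH through ssh_runner.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
        }
        return nil
    }

    func main() {
        // Mirror of the sequence in the log: verify the bridge netfilter sysctl,
        // load br_netfilter if the key is missing, then enable IP forwarding.
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("sysctl check failed, loading br_netfilter:", err)
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe failed:", err)
            }
        }
        if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }
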
	I0816 00:34:24.473030   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:24.587542   78489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:24.719072   78489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:24.719159   78489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:24.723789   78489 start.go:563] Will wait 60s for crictl version
	I0816 00:34:24.723842   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:24.727616   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:24.766517   78489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:24.766600   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.795204   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.824529   78489 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:34:20.376278   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.376510   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:24.876314   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.822114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.321350   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.821541   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.322014   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.821938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.321883   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.821178   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.321881   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.821199   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:26.321573   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.825725   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:24.828458   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829018   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:24.829045   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829336   78489 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:24.833711   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
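
The host.minikube.internal entry is refreshed by filtering any existing mapping out of /etc/hosts and appending the gateway IP (192.168.61.1). A minimal local sketch of that filter-and-append step follows; writing the file directly is a simplification of the temp-file-plus-sudo-cp dance in the log, and it assumes sufficient privileges.

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.61.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping so the entry is never duplicated.
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }

The same pattern is reused later in this run for the control-plane.minikube.internal entry.
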
	I0816 00:34:24.847017   78489 kubeadm.go:883] updating cluster {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:24.847136   78489 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:34:24.847171   78489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:24.883489   78489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:34:24.883515   78489 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:24.883592   78489 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.883612   78489 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.883664   78489 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:24.883690   78489 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.883719   78489 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.883595   78489 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.883927   78489 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.884016   78489 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885061   78489 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.885185   78489 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885207   78489 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.885204   78489 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.885225   78489 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.042311   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.042317   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.048181   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.050502   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.059137   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.091688   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 00:34:25.096653   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.126261   78489 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 00:34:25.126311   78489 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.126368   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.164673   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.189972   78489 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 00:34:25.190014   78489 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.190051   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249632   78489 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 00:34:25.249674   78489 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.249717   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249780   78489 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 00:34:25.249824   78489 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.249884   78489 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 00:34:25.249910   78489 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.249887   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249942   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360038   78489 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 00:34:25.360082   78489 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.360121   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360133   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.360191   78489 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 00:34:25.360208   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.360221   78489 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.360256   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360283   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.360326   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.360337   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.462610   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.462691   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.480037   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.480114   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.480176   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.480211   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.489343   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.642853   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.642913   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.642963   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.645719   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.645749   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.645833   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.645899   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.802574   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 00:34:25.802645   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.802687   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.802728   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.808235   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 00:34:25.808330   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 00:34:25.808387   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 00:34:25.808401   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 00:34:25.808432   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:25.808334   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:25.808471   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:25.808480   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:25.816510   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 00:34:25.816527   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.816560   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.885445   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 00:34:25.885532   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 00:34:25.885549   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:25.885588   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 00:34:25.885600   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:25.885674   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 00:34:25.885690   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 00:34:25.885711   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 00:34:24.426102   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.927534   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.877013   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:29.378108   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.821489   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.322094   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.321201   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.821854   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.321188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.821729   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.321316   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.821998   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:31.322184   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.938767   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.122182459s)
	I0816 00:34:27.938804   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 00:34:27.938801   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.05323098s)
	I0816 00:34:27.938826   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.05321158s)
	I0816 00:34:27.938831   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 00:34:27.938833   78489 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:27.938843   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 00:34:27.938906   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:31.645449   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.706515577s)
	I0816 00:34:31.645486   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 00:34:31.645514   78489 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:31.645563   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:29.427463   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.927253   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.875608   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:33.876822   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.821361   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.321205   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.822088   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.322126   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.821956   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.321921   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.821245   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.822034   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:36.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.625714   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.980118908s)
	I0816 00:34:33.625749   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 00:34:33.625773   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:33.625824   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:35.680134   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054281396s)
	I0816 00:34:35.680167   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 00:34:35.680209   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:35.680276   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:34.426416   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.427589   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:38.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:35.877327   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:37.877385   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.821567   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.321329   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.822169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.321832   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.821404   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.321406   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.821914   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.322169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.821149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:41.322125   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.430152   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.749849436s)
	I0816 00:34:37.430180   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 00:34:37.430208   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:37.430254   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:39.684335   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.254047221s)
	I0816 00:34:39.684365   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 00:34:39.684391   78489 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:39.684445   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:40.328672   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 00:34:40.328722   78489 cache_images.go:123] Successfully loaded all cached images
	I0816 00:34:40.328729   78489 cache_images.go:92] duration metric: took 15.445200533s to LoadCachedImages
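
The LoadCachedImages pass summarized above repeats one pattern per image: inspect it in the runtime, remove it when the hash does not match, skip the tarball copy when the archive already exists on the VM, then `sudo podman load -i` the cached archive. A condensed local sketch of that loop is below; the remote stat/scp plumbing is omitted, the image list is copied from the log, and the paths are assumptions.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        images := []string{
            "kube-apiserver_v1.31.0", "kube-controller-manager_v1.31.0",
            "kube-scheduler_v1.31.0", "kube-proxy_v1.31.0",
            "etcd_3.5.15-0", "coredns_v1.11.1", "storage-provisioner_v5",
        }
        for _, img := range images {
            tar := filepath.Join("/var/lib/minikube/images", img)
            if _, err := os.Stat(tar); err != nil {
                fmt.Println("cached archive missing, would need to copy:", tar)
                continue
            }
            // Equivalent of "sudo podman load -i <archive>" from the log.
            out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
            if err != nil {
                fmt.Printf("load %s failed: %v: %s\n", img, err, out)
                continue
            }
            fmt.Println("loaded", img)
        }
    }
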
	I0816 00:34:40.328743   78489 kubeadm.go:934] updating node { 192.168.61.15 8443 v1.31.0 crio true true} ...
	I0816 00:34:40.328897   78489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-819398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:40.328994   78489 ssh_runner.go:195] Run: crio config
	I0816 00:34:40.383655   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:40.383675   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:40.383685   78489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:40.383712   78489 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.15 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-819398 NodeName:no-preload-819398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:34:40.383855   78489 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-819398"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:40.383930   78489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:34:40.395384   78489 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:40.395457   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:40.405037   78489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 00:34:40.423278   78489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:40.440963   78489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 00:34:40.458845   78489 ssh_runner.go:195] Run: grep 192.168.61.15	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:40.462574   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:40.475524   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:40.614624   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:40.632229   78489 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398 for IP: 192.168.61.15
	I0816 00:34:40.632252   78489 certs.go:194] generating shared ca certs ...
	I0816 00:34:40.632267   78489 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:40.632430   78489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:40.632483   78489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:40.632497   78489 certs.go:256] generating profile certs ...
	I0816 00:34:40.632598   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/client.key
	I0816 00:34:40.632679   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key.a9de72ef
	I0816 00:34:40.632759   78489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key
	I0816 00:34:40.632919   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:40.632962   78489 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:40.632978   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:40.633011   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:40.633042   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:40.633068   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:40.633124   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:40.633963   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:40.676094   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:40.707032   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:40.740455   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:40.778080   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 00:34:40.809950   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:34:40.841459   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:40.866708   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:34:40.893568   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:40.917144   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:40.942349   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:40.966731   78489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:40.984268   78489 ssh_runner.go:195] Run: openssl version
	I0816 00:34:40.990614   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:41.002909   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007595   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007645   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.013618   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:41.024886   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:41.036350   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040801   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040845   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.046554   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:41.057707   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:41.069566   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074107   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074159   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.080113   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
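
Each trusted certificate above is installed twice: by name under /usr/share/ca-certificates and as an OpenSSL subject-hash symlink (3ec20f2e.0, b5213941.0, 51391683.0 in this run) under /etc/ssl/certs, which is how OpenSSL locates trust anchors. A small sketch of deriving one such hash link is below; the helper name is made up, it must run as root, and error handling is trimmed.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // hashLink recreates the "openssl x509 -hash" + "ln -fs" pair from the log.
    func hashLink(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        // ln -fs: replace any existing link so the hash always points at this cert.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("hash link failed:", err)
        }
    }

Repeated runs are harmless because any existing link is simply replaced, which matches the idempotent test/ln pattern in the log.
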
	I0816 00:34:41.091854   78489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:41.096543   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:41.102883   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:41.109228   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:41.115622   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:41.121895   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:41.128016   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
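
Each `openssl x509 -noout ... -checkend 86400` run above exits successfully only if the certificate will still be valid 24 hours from now; passing these checks is what lets the restart path reuse the existing control-plane certificates instead of regenerating them. Below is a rough Go equivalent of that check using only the standard library; the function name is made up and the cert path is just one of the files checked in the log.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
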
	I0816 00:34:41.134126   78489 kubeadm.go:392] StartCluster: {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:41.134230   78489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:41.134310   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.178898   78489 cri.go:89] found id: ""
	I0816 00:34:41.178972   78489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:41.190167   78489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:41.190184   78489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:41.190223   78489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:41.200385   78489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:41.201824   78489 kubeconfig.go:125] found "no-preload-819398" server: "https://192.168.61.15:8443"
	I0816 00:34:41.204812   78489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:41.225215   78489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.15
	I0816 00:34:41.225252   78489 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:41.225265   78489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:41.225323   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.269288   78489 cri.go:89] found id: ""
	I0816 00:34:41.269377   78489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:41.286238   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:41.297713   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:41.297732   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:41.297782   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:41.308635   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:41.308695   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:41.320045   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:41.329866   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:41.329952   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:41.341488   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.351018   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:41.351083   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.360845   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:41.370730   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:41.370808   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:41.382572   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:41.392544   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.515558   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.425671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:43.426507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:40.377638   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:42.877395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:41.821459   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.321938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.822038   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.321447   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.821571   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.321428   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.821496   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:46.322149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.610068   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.094473643s)
	I0816 00:34:42.610106   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.850562   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.916519   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:43.042025   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:43.042117   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.543065   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.043098   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.061154   78489 api_server.go:72] duration metric: took 1.019134992s to wait for apiserver process to appear ...
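
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a simple poll: pgrep exits 0 only once a process matching the full command line exists, so the caller retries roughly every 500ms until it does (here the process appeared after about a second). A minimal Go sketch of such a loop follows; the pgrep pattern comes from the log, while the timeout, the local (non-SSH, non-sudo) invocation and the function name are assumptions for the sake of a self-contained example.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until a process matching pattern appears or the
    // timeout elapses, much like the "waiting for apiserver process to appear"
    // loop in the log.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches the full command line.
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("process matching %q did not appear within %s", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("kube-apiserver is running")
    }
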
	I0816 00:34:44.061180   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:34:44.061199   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.718683   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.718717   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:46.718730   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.785528   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.785559   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:47.061692   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.066556   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.066590   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:47.562057   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.569664   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.569699   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:48.061258   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:48.065926   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:34:48.073136   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:34:48.073165   78489 api_server.go:131] duration metric: took 4.011977616s to wait for apiserver health ...
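
The healthz sequence above is the expected shape of a control-plane restart: /healthz first answers 403 for the unauthenticated probe (system:anonymous), then 500 while post-start hooks such as rbac/bootstrap-roles and the priority-class bootstrap finish, and finally 200. A bare-bones sketch of such a poller is shown below; the endpoint URL is taken from the log, while the insecure TLS config, retry interval and timeout are assumptions made so the snippet is self-contained, not minikube's actual implementation.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it answers 200 OK,
    // tolerating the 403/500 responses seen while the control plane finishes booting.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver serves a cert we have not loaded here, so this sketch
    		// skips verification (an assumption for brevity).
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s did not become healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("apiserver healthy")
    }
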
	I0816 00:34:48.073179   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:48.073189   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:48.075105   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:34:45.925817   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.925984   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:45.376424   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.377794   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.876764   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:46.822140   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.321575   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.321365   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.822009   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.321536   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.821189   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.321387   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.821982   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:51.322075   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.076340   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:34:48.113148   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:34:48.152316   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:34:48.166108   78489 system_pods.go:59] 8 kube-system pods found
	I0816 00:34:48.166142   78489 system_pods.go:61] "coredns-6f6b679f8f-sv454" [5ba1d55f-4455-4ad1-b3c8-7671ce481dd2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:34:48.166154   78489 system_pods.go:61] "etcd-no-preload-819398" [b5e55df3-fb20-4980-928f-31217bf25351] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:34:48.166164   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [7670f41c-8439-4782-a3c8-077a144d2998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:34:48.166175   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [61a6080a-5e65-4400-b230-0703f347fc17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:34:48.166182   78489 system_pods.go:61] "kube-proxy-xdm7w" [9d0517c5-8cf7-47a0-86d0-c674677e9f46] Running
	I0816 00:34:48.166191   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [af346e37-312a-4225-b3bf-0ddda71022dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:34:48.166204   78489 system_pods.go:61] "metrics-server-6867b74b74-mm5l7" [2ebc3f9f-e1a7-47b6-849e-6a4995d13206] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:34:48.166214   78489 system_pods.go:61] "storage-provisioner" [745bbfbd-aedb-4e68-946e-5a7ead1d5b48] Running
	I0816 00:34:48.166223   78489 system_pods.go:74] duration metric: took 13.883212ms to wait for pod list to return data ...
	I0816 00:34:48.166235   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:34:48.170444   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:34:48.170478   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:34:48.170492   78489 node_conditions.go:105] duration metric: took 4.251703ms to run NodePressure ...
	I0816 00:34:48.170520   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:48.437519   78489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:34:48.441992   78489 kubeadm.go:739] kubelet initialised
	I0816 00:34:48.442015   78489 kubeadm.go:740] duration metric: took 4.465986ms waiting for restarted kubelet to initialise ...
	I0816 00:34:48.442025   78489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:48.447127   78489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:50.453956   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
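
The pod_ready.go lines above (and the interleaved metrics-server waits from the other test processes) all watch for the Pod's Ready condition to flip to True. The sketch below shows roughly how such a wait can be written with client-go; the namespace and pod name are taken from the log, but the use of KUBECONFIG, the poll interval and the helper name are assumptions for illustration, not the test suite's actual code.

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady blocks until the named pod reports the Ready condition,
    // the rough equivalent of the pod_ready.go waits in the log.
    func waitPodReady(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // pod may not exist yet; keep polling
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := waitPodReady(cs, "kube-system", "coredns-6f6b679f8f-sv454", 4*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("pod is Ready")
    }
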
	I0816 00:34:49.926184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.926515   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.876909   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.376236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.822066   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.321534   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.821154   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.321256   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.821510   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.321984   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.821175   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.321601   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:56.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.454122   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.954716   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.426224   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.926472   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.376394   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:58.876502   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.821891   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.321266   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.821346   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.321718   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.821304   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.821302   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.821563   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:01.321323   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.453951   78489 pod_ready.go:93] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:57.453974   78489 pod_ready.go:82] duration metric: took 9.00682228s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:57.453983   78489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.460582   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.961243   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:00.961269   78489 pod_ready.go:82] duration metric: took 3.507278873s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:00.961279   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468020   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:01.468047   78489 pod_ready.go:82] duration metric: took 506.758881ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468060   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.425956   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.925967   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.876678   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:03.376662   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.821317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.321560   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.821707   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.322110   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.821327   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.321430   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.821935   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.321559   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.821373   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.975498   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.975522   78489 pod_ready.go:82] duration metric: took 1.50745395s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.975531   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980290   78489 pod_ready.go:93] pod "kube-proxy-xdm7w" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.980316   78489 pod_ready.go:82] duration metric: took 4.778704ms for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980328   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988237   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.988260   78489 pod_ready.go:82] duration metric: took 7.924207ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988268   78489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:04.993992   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:04.426419   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.426648   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.927578   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:05.877102   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:07.877187   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.821405   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.321781   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.821420   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.321483   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.821347   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.321167   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.821188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.821179   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:11.322114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.994539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.995530   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.494248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.425605   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:13.426338   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:10.378729   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:12.875673   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:14.876717   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.822105   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.321963   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.822172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.321805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.821971   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:14.321784   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:14.321882   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:14.360939   79191 cri.go:89] found id: ""
	I0816 00:35:14.360962   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.360971   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:14.360976   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:14.361028   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:14.397796   79191 cri.go:89] found id: ""
	I0816 00:35:14.397824   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.397836   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:14.397858   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:14.397922   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:14.433924   79191 cri.go:89] found id: ""
	I0816 00:35:14.433950   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.433960   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:14.433968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:14.434024   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:14.468657   79191 cri.go:89] found id: ""
	I0816 00:35:14.468685   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.468696   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:14.468704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:14.468770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:14.505221   79191 cri.go:89] found id: ""
	I0816 00:35:14.505247   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.505256   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:14.505264   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:14.505323   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:14.546032   79191 cri.go:89] found id: ""
	I0816 00:35:14.546062   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.546072   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:14.546079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:14.546147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:14.581260   79191 cri.go:89] found id: ""
	I0816 00:35:14.581284   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.581292   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:14.581298   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:14.581352   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:14.616103   79191 cri.go:89] found id: ""
	I0816 00:35:14.616127   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.616134   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
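
Each retry in this block walks the same checklist: ask crictl for containers of every control-plane component, find none (the old-k8s-version cluster has not brought any up yet), and fall back to gathering kubelet, dmesg and CRI-O logs. A small sketch of the listing step follows; it shells out to the same `crictl ps -a --quiet --name=...` invocation shown in the log, with the sudo and SSH hop omitted so the example runs locally.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs returns the IDs of all containers (running or not) whose
    // name matches the given component, mirroring
    // `crictl ps -a --quiet --name=<component>` from the log.
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps for %s: %w", component, err)
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
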
	I0816 00:35:14.616142   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:14.616153   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:14.690062   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:14.690106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:14.735662   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:14.735699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:14.786049   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:14.786086   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:14.800375   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:14.800405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:14.931822   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:13.494676   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.497759   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.925671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.926279   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.375842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.376005   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.432686   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:17.448728   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:17.448806   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:17.496384   79191 cri.go:89] found id: ""
	I0816 00:35:17.496523   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.496568   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:17.496581   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:17.496646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:17.560779   79191 cri.go:89] found id: ""
	I0816 00:35:17.560810   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.560820   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:17.560829   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:17.560891   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:17.606007   79191 cri.go:89] found id: ""
	I0816 00:35:17.606036   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.606047   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:17.606054   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:17.606123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:17.639910   79191 cri.go:89] found id: ""
	I0816 00:35:17.639937   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.639945   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:17.639951   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:17.640030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:17.676534   79191 cri.go:89] found id: ""
	I0816 00:35:17.676563   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.676573   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:17.676581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:17.676645   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:17.716233   79191 cri.go:89] found id: ""
	I0816 00:35:17.716255   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.716262   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:17.716268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:17.716334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:17.753648   79191 cri.go:89] found id: ""
	I0816 00:35:17.753686   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.753696   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:17.753704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:17.753763   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:17.791670   79191 cri.go:89] found id: ""
	I0816 00:35:17.791694   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.791702   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:17.791711   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:17.791722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:17.840616   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:17.840650   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:17.854949   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:17.854981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:17.933699   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:17.933724   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:17.933750   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:18.010177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:18.010211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:20.551384   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:20.564463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:20.564540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:20.604361   79191 cri.go:89] found id: ""
	I0816 00:35:20.604389   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.604399   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:20.604405   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:20.604453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:20.639502   79191 cri.go:89] found id: ""
	I0816 00:35:20.639528   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.639535   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:20.639541   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:20.639590   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:20.676430   79191 cri.go:89] found id: ""
	I0816 00:35:20.676476   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.676484   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:20.676496   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:20.676551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:20.711213   79191 cri.go:89] found id: ""
	I0816 00:35:20.711243   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.711253   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:20.711261   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:20.711320   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:20.745533   79191 cri.go:89] found id: ""
	I0816 00:35:20.745563   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.745574   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:20.745581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:20.745644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:20.781031   79191 cri.go:89] found id: ""
	I0816 00:35:20.781056   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.781064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:20.781071   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:20.781119   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:20.819966   79191 cri.go:89] found id: ""
	I0816 00:35:20.819994   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.820005   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:20.820012   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:20.820096   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:20.859011   79191 cri.go:89] found id: ""
	I0816 00:35:20.859041   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.859052   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:20.859063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:20.859078   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:20.909479   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:20.909513   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:20.925627   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:20.925653   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:21.001707   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:21.001733   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:21.001747   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:21.085853   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:21.085893   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:17.994492   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:20.496255   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:22.426663   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:21.878587   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.377462   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:23.626499   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:23.640337   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:23.640395   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:23.679422   79191 cri.go:89] found id: ""
	I0816 00:35:23.679449   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.679457   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:23.679463   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:23.679522   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:23.716571   79191 cri.go:89] found id: ""
	I0816 00:35:23.716594   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.716601   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:23.716607   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:23.716660   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:23.752539   79191 cri.go:89] found id: ""
	I0816 00:35:23.752563   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.752573   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:23.752581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:23.752640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:23.790665   79191 cri.go:89] found id: ""
	I0816 00:35:23.790693   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.790700   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:23.790707   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:23.790757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:23.827695   79191 cri.go:89] found id: ""
	I0816 00:35:23.827719   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.827727   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:23.827733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:23.827792   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:23.867664   79191 cri.go:89] found id: ""
	I0816 00:35:23.867687   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.867695   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:23.867701   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:23.867776   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:23.907844   79191 cri.go:89] found id: ""
	I0816 00:35:23.907871   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.907882   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:23.907890   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:23.907951   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:23.945372   79191 cri.go:89] found id: ""
	I0816 00:35:23.945403   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.945414   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:23.945424   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:23.945438   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:23.998270   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:23.998302   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:24.012794   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:24.012824   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:24.087285   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:24.087308   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:24.087340   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:24.167151   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:24.167184   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
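The cycle repeated above is minikube probing the node for control-plane containers before it can gather cluster logs: each pass runs crictl against every expected component name (kube-apiserver, etcd, coredns, ...) and finds no IDs, then falls back to collecting kubelet, dmesg, CRI-O and container-status output. A minimal standalone Go sketch of that single probe (illustrative only, not minikube's own logs.go/cri.go code) looks like this:

// Illustrative sketch: mirrors the probe the log above repeats, i.e. asking
// crictl whether any kube-apiserver container exists in any state.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the harness runs over SSH: list all containers (any state)
	// whose name matches kube-apiserver, printing only their IDs.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		// This is the state the log shows: no control-plane containers yet,
		// so the apiserver on localhost:8443 refuses connections.
		fmt.Println(`no container was found matching "kube-apiserver"`)
		return
	}
	fmt.Printf("found %d kube-apiserver container(s): %v\n", len(ids), ids)
}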
	I0816 00:35:26.710285   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:26.724394   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:26.724453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:26.764667   79191 cri.go:89] found id: ""
	I0816 00:35:26.764690   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.764698   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:26.764704   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:26.764756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:22.994036   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.995035   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.927042   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:27.426054   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.877007   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.376563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.806631   79191 cri.go:89] found id: ""
	I0816 00:35:26.806660   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.806670   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:26.806677   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:26.806741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:26.843434   79191 cri.go:89] found id: ""
	I0816 00:35:26.843473   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.843485   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:26.843493   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:26.843576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:26.882521   79191 cri.go:89] found id: ""
	I0816 00:35:26.882556   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.882566   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:26.882574   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:26.882635   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:26.917956   79191 cri.go:89] found id: ""
	I0816 00:35:26.917985   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.917995   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:26.918004   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:26.918056   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:26.953168   79191 cri.go:89] found id: ""
	I0816 00:35:26.953191   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.953199   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:26.953205   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:26.953251   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:26.991366   79191 cri.go:89] found id: ""
	I0816 00:35:26.991397   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.991408   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:26.991416   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:26.991479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:27.028591   79191 cri.go:89] found id: ""
	I0816 00:35:27.028619   79191 logs.go:276] 0 containers: []
	W0816 00:35:27.028626   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:27.028635   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:27.028647   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:27.111613   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:27.111645   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:27.153539   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:27.153575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:27.209377   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:27.209420   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:27.223316   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:27.223343   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:27.301411   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:29.801803   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:29.815545   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:29.815626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:29.853638   79191 cri.go:89] found id: ""
	I0816 00:35:29.853668   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.853678   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:29.853687   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:29.853756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:29.892532   79191 cri.go:89] found id: ""
	I0816 00:35:29.892554   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.892561   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:29.892567   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:29.892622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:29.932486   79191 cri.go:89] found id: ""
	I0816 00:35:29.932511   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.932519   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:29.932524   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:29.932580   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:29.973161   79191 cri.go:89] found id: ""
	I0816 00:35:29.973194   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.973205   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:29.973213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:29.973275   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:30.009606   79191 cri.go:89] found id: ""
	I0816 00:35:30.009629   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.009637   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:30.009643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:30.009691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:30.045016   79191 cri.go:89] found id: ""
	I0816 00:35:30.045043   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.045050   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:30.045057   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:30.045113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:30.079934   79191 cri.go:89] found id: ""
	I0816 00:35:30.079959   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.079968   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:30.079974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:30.080030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:30.114173   79191 cri.go:89] found id: ""
	I0816 00:35:30.114199   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.114207   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:30.114216   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:30.114227   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:30.154765   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:30.154791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:30.204410   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:30.204442   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:30.218909   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:30.218934   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:30.294141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:30.294161   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:30.294193   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:26.995394   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.494569   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.426234   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.926349   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.926433   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.376976   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.377869   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:32.872216   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:32.886211   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:32.886289   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:32.929416   79191 cri.go:89] found id: ""
	I0816 00:35:32.929440   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.929449   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:32.929456   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:32.929520   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:32.977862   79191 cri.go:89] found id: ""
	I0816 00:35:32.977887   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.977896   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:32.977920   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:32.977978   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:33.015569   79191 cri.go:89] found id: ""
	I0816 00:35:33.015593   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.015603   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:33.015622   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:33.015681   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:33.050900   79191 cri.go:89] found id: ""
	I0816 00:35:33.050934   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.050943   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:33.050959   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:33.051033   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:33.084529   79191 cri.go:89] found id: ""
	I0816 00:35:33.084556   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.084564   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:33.084569   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:33.084619   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:33.119819   79191 cri.go:89] found id: ""
	I0816 00:35:33.119845   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.119855   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:33.119863   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:33.119928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:33.159922   79191 cri.go:89] found id: ""
	I0816 00:35:33.159952   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.159959   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:33.159965   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:33.160023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:33.194977   79191 cri.go:89] found id: ""
	I0816 00:35:33.195006   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.195018   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:33.195030   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:33.195044   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:33.208578   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:33.208623   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:33.282177   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:33.282198   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:33.282211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:33.365514   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:33.365552   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:33.405190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:33.405226   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:35.959033   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:35.971866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:35.971934   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:36.008442   79191 cri.go:89] found id: ""
	I0816 00:35:36.008473   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.008483   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:36.008489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:36.008547   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:36.044346   79191 cri.go:89] found id: ""
	I0816 00:35:36.044374   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.044386   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:36.044393   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:36.044444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:36.083078   79191 cri.go:89] found id: ""
	I0816 00:35:36.083104   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.083112   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:36.083118   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:36.083166   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:36.120195   79191 cri.go:89] found id: ""
	I0816 00:35:36.120218   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.120226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:36.120232   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:36.120288   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:36.156186   79191 cri.go:89] found id: ""
	I0816 00:35:36.156215   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.156225   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:36.156233   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:36.156295   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:36.195585   79191 cri.go:89] found id: ""
	I0816 00:35:36.195613   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.195623   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:36.195631   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:36.195699   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:36.231110   79191 cri.go:89] found id: ""
	I0816 00:35:36.231133   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.231141   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:36.231147   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:36.231210   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:36.268745   79191 cri.go:89] found id: ""
	I0816 00:35:36.268770   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.268778   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:36.268786   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:36.268800   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:36.282225   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:36.282251   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:36.351401   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:36.351431   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:36.351447   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:36.429970   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:36.430003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:36.473745   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:36.473776   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:31.994163   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.994256   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.995188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:36.427247   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.926123   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.877303   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
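The interleaved pod_ready lines come from the other test clusters (processes 78489, 78713 and 78747) polling whether their metrics-server pod has reached the Ready condition. A rough way to reproduce one such check outside the harness, assuming kubectl already points at the cluster in question and using a pod name taken from the log, is the following sketch:

// Illustrative sketch: reports a pod's Ready condition, roughly what the
// pod_ready.go lines above keep polling until it flips to "True".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "get", "pod",
		"metrics-server-6867b74b74-mm5l7", "-n", "kube-system",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// The log keeps reporting has status "Ready":"False" until the pod is up.
	fmt.Printf("pod Ready condition: %q\n", strings.TrimSpace(string(out)))
}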
	I0816 00:35:39.027444   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:39.041107   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:39.041170   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:39.079807   79191 cri.go:89] found id: ""
	I0816 00:35:39.079830   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.079837   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:39.079843   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:39.079890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:39.115532   79191 cri.go:89] found id: ""
	I0816 00:35:39.115559   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.115569   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:39.115576   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:39.115623   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:39.150197   79191 cri.go:89] found id: ""
	I0816 00:35:39.150222   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.150233   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:39.150241   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:39.150300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:39.186480   79191 cri.go:89] found id: ""
	I0816 00:35:39.186507   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.186515   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:39.186521   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:39.186572   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:39.221576   79191 cri.go:89] found id: ""
	I0816 00:35:39.221605   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.221615   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:39.221620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:39.221669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:39.259846   79191 cri.go:89] found id: ""
	I0816 00:35:39.259877   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.259888   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:39.259896   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:39.259950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:39.294866   79191 cri.go:89] found id: ""
	I0816 00:35:39.294891   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.294898   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:39.294903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:39.294952   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:39.329546   79191 cri.go:89] found id: ""
	I0816 00:35:39.329576   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.329584   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:39.329593   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:39.329604   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:39.371579   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:39.371609   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:39.422903   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:39.422935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:39.437673   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:39.437699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:39.515146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:39.515171   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:39.515185   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:38.495377   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.495856   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.926444   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:43.426438   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.376648   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.877521   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.101733   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:42.115563   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:42.115640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:42.155187   79191 cri.go:89] found id: ""
	I0816 00:35:42.155216   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.155224   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:42.155230   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:42.155282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:42.194414   79191 cri.go:89] found id: ""
	I0816 00:35:42.194444   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.194456   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:42.194464   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:42.194523   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:42.234219   79191 cri.go:89] found id: ""
	I0816 00:35:42.234245   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.234253   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:42.234259   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:42.234314   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:42.272278   79191 cri.go:89] found id: ""
	I0816 00:35:42.272304   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.272314   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:42.272322   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:42.272381   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:42.309973   79191 cri.go:89] found id: ""
	I0816 00:35:42.309999   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.310007   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:42.310013   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:42.310066   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:42.350745   79191 cri.go:89] found id: ""
	I0816 00:35:42.350773   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.350782   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:42.350790   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:42.350853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:42.387775   79191 cri.go:89] found id: ""
	I0816 00:35:42.387803   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.387813   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:42.387832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:42.387902   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:42.425086   79191 cri.go:89] found id: ""
	I0816 00:35:42.425110   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.425118   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:42.425125   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:42.425138   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:42.515543   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:42.515575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:42.558348   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:42.558372   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:42.613026   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:42.613059   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.628907   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:42.628932   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:42.710265   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
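Every "describe nodes" attempt above fails the same way because nothing is listening on localhost:8443 while the control-plane containers are missing. A small Go sketch that mirrors this probe, assuming the same kubectl binary path and kubeconfig path shown in the log and execution on the node itself:

// Illustrative sketch: runs the same "describe nodes" command the harness
// retries; it exits with status 1 and "connection refused" until the
// apiserver comes up on localhost:8443.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// Matches the log: "Process exited with status 1" plus the
		// connection-refused message on stderr.
		fmt.Println("describe nodes failed:", err)
	}
}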
	I0816 00:35:45.211083   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:45.225001   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:45.225083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:45.258193   79191 cri.go:89] found id: ""
	I0816 00:35:45.258223   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.258232   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:45.258240   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:45.258297   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:45.294255   79191 cri.go:89] found id: ""
	I0816 00:35:45.294278   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.294286   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:45.294291   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:45.294335   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:45.329827   79191 cri.go:89] found id: ""
	I0816 00:35:45.329875   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.329886   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:45.329894   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:45.329944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:45.366095   79191 cri.go:89] found id: ""
	I0816 00:35:45.366124   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.366134   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:45.366141   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:45.366202   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:45.402367   79191 cri.go:89] found id: ""
	I0816 00:35:45.402390   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.402398   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:45.402403   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:45.402449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:45.439272   79191 cri.go:89] found id: ""
	I0816 00:35:45.439293   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.439300   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:45.439310   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:45.439358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:45.474351   79191 cri.go:89] found id: ""
	I0816 00:35:45.474380   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.474388   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:45.474393   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:45.474445   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:45.519636   79191 cri.go:89] found id: ""
	I0816 00:35:45.519661   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.519671   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:45.519680   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:45.519695   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:45.593425   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.593446   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:45.593458   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:45.668058   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:45.668095   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:45.716090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:45.716125   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:45.774177   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:45.774207   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.495914   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:44.996641   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.426740   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.925719   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.376025   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.376628   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.876035   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:48.288893   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:48.302256   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:48.302321   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:48.337001   79191 cri.go:89] found id: ""
	I0816 00:35:48.337030   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.337041   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:48.337048   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:48.337110   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:48.378341   79191 cri.go:89] found id: ""
	I0816 00:35:48.378367   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.378375   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:48.378384   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:48.378447   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:48.414304   79191 cri.go:89] found id: ""
	I0816 00:35:48.414383   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.414402   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:48.414410   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:48.414473   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:48.453946   79191 cri.go:89] found id: ""
	I0816 00:35:48.453969   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.453976   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:48.453982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:48.454036   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:48.489597   79191 cri.go:89] found id: ""
	I0816 00:35:48.489617   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.489623   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:48.489629   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:48.489672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:48.524195   79191 cri.go:89] found id: ""
	I0816 00:35:48.524222   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.524232   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:48.524239   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:48.524293   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:48.567854   79191 cri.go:89] found id: ""
	I0816 00:35:48.567880   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.567890   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:48.567897   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:48.567956   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:48.603494   79191 cri.go:89] found id: ""
	I0816 00:35:48.603520   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.603530   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:48.603540   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:48.603556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:48.642927   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:48.642960   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:48.693761   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:48.693791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:48.708790   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:48.708818   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:48.780072   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:48.780092   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:48.780106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.362108   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:51.376113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:51.376185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:51.413988   79191 cri.go:89] found id: ""
	I0816 00:35:51.414022   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.414033   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:51.414041   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:51.414101   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:51.460901   79191 cri.go:89] found id: ""
	I0816 00:35:51.460937   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.460948   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:51.460956   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:51.461019   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:51.497178   79191 cri.go:89] found id: ""
	I0816 00:35:51.497205   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.497215   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:51.497223   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:51.497365   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:51.534559   79191 cri.go:89] found id: ""
	I0816 00:35:51.534589   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.534600   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:51.534607   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:51.534668   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:51.570258   79191 cri.go:89] found id: ""
	I0816 00:35:51.570280   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.570287   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:51.570293   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:51.570356   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:51.609639   79191 cri.go:89] found id: ""
	I0816 00:35:51.609665   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.609675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:51.609683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:51.609742   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:51.645629   79191 cri.go:89] found id: ""
	I0816 00:35:51.645652   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.645659   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:51.645664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:51.645731   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:51.683325   79191 cri.go:89] found id: ""
	I0816 00:35:51.683344   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.683351   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:51.683358   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:51.683369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:51.739101   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:51.739133   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:51.753436   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:51.753466   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:35:47.494904   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.495416   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.926975   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:51.928318   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:52.376854   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.880623   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:35:51.831242   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:51.831268   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:51.831294   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.926924   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:51.926970   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:54.472667   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:54.486706   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:54.486785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:54.524180   79191 cri.go:89] found id: ""
	I0816 00:35:54.524203   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.524211   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:54.524216   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:54.524273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:54.563758   79191 cri.go:89] found id: ""
	I0816 00:35:54.563781   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.563788   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:54.563795   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:54.563859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:54.599442   79191 cri.go:89] found id: ""
	I0816 00:35:54.599471   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.599481   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:54.599488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:54.599553   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:54.633521   79191 cri.go:89] found id: ""
	I0816 00:35:54.633547   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.633558   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:54.633565   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:54.633628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:54.670036   79191 cri.go:89] found id: ""
	I0816 00:35:54.670064   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.670075   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:54.670083   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:54.670148   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:54.707565   79191 cri.go:89] found id: ""
	I0816 00:35:54.707587   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.707594   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:54.707600   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:54.707659   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:54.744500   79191 cri.go:89] found id: ""
	I0816 00:35:54.744530   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.744541   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:54.744548   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:54.744612   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:54.778964   79191 cri.go:89] found id: ""
	I0816 00:35:54.778988   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.778995   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:54.779007   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:54.779020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:54.831806   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:54.831838   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:54.845954   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:54.845979   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:54.921817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:54.921855   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:54.921871   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:55.006401   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:55.006439   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:51.996591   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.495673   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.427044   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:56.927184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.376333   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.548661   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:57.562489   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:57.562549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:57.597855   79191 cri.go:89] found id: ""
	I0816 00:35:57.597881   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.597891   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:57.597899   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:57.597961   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:57.634085   79191 cri.go:89] found id: ""
	I0816 00:35:57.634114   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.634126   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:57.634133   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:57.634193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:57.671748   79191 cri.go:89] found id: ""
	I0816 00:35:57.671779   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.671788   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:57.671795   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:57.671859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:57.708836   79191 cri.go:89] found id: ""
	I0816 00:35:57.708862   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.708870   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:57.708877   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:57.708940   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:57.744601   79191 cri.go:89] found id: ""
	I0816 00:35:57.744630   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.744639   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:57.744645   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:57.744706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:57.781888   79191 cri.go:89] found id: ""
	I0816 00:35:57.781919   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.781929   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:57.781937   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:57.781997   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:57.822612   79191 cri.go:89] found id: ""
	I0816 00:35:57.822634   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.822641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:57.822647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:57.822706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:57.873968   79191 cri.go:89] found id: ""
	I0816 00:35:57.873998   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.874008   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:57.874019   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:57.874037   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:57.896611   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:57.896643   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:57.995575   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:57.995597   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:57.995612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:58.077196   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:58.077230   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:58.116956   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:58.116985   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:00.664805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:00.678425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:00.678501   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:00.715522   79191 cri.go:89] found id: ""
	I0816 00:36:00.715548   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.715557   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:00.715562   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:00.715608   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:00.749892   79191 cri.go:89] found id: ""
	I0816 00:36:00.749920   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.749931   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:00.749938   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:00.750006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:00.787302   79191 cri.go:89] found id: ""
	I0816 00:36:00.787325   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.787332   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:00.787338   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:00.787392   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:00.821866   79191 cri.go:89] found id: ""
	I0816 00:36:00.821894   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.821906   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:00.821914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:00.821971   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:00.856346   79191 cri.go:89] found id: ""
	I0816 00:36:00.856369   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.856377   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:00.856382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:00.856431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:00.893569   79191 cri.go:89] found id: ""
	I0816 00:36:00.893596   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.893606   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:00.893614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:00.893677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:00.930342   79191 cri.go:89] found id: ""
	I0816 00:36:00.930367   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.930378   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:00.930386   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:00.930622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:00.966039   79191 cri.go:89] found id: ""
	I0816 00:36:00.966071   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.966085   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:00.966095   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:00.966109   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:01.045594   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:01.045631   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:01.089555   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:01.089586   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:01.141597   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:01.141633   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:01.156260   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:01.156286   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:01.230573   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:56.995077   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:58.995897   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.495116   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.426099   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.926011   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.927327   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.376842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.875993   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.730825   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:03.744766   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:03.744838   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:03.781095   79191 cri.go:89] found id: ""
	I0816 00:36:03.781124   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.781142   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:03.781150   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:03.781215   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:03.815637   79191 cri.go:89] found id: ""
	I0816 00:36:03.815669   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.815680   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:03.815687   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:03.815741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:03.850076   79191 cri.go:89] found id: ""
	I0816 00:36:03.850110   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.850122   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:03.850130   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:03.850185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:03.888840   79191 cri.go:89] found id: ""
	I0816 00:36:03.888863   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.888872   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:03.888879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:03.888941   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:03.928317   79191 cri.go:89] found id: ""
	I0816 00:36:03.928341   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.928350   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:03.928359   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:03.928413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:03.964709   79191 cri.go:89] found id: ""
	I0816 00:36:03.964741   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.964751   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:03.964760   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:03.964830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:03.999877   79191 cri.go:89] found id: ""
	I0816 00:36:03.999902   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.999912   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:03.999919   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:03.999981   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:04.036772   79191 cri.go:89] found id: ""
	I0816 00:36:04.036799   79191 logs.go:276] 0 containers: []
	W0816 00:36:04.036810   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:04.036820   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:04.036833   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:04.118843   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:04.118879   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:04.162491   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:04.162548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:04.215100   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:04.215134   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:04.229043   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:04.229069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:04.307480   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:03.495661   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.995711   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.426223   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:08.426470   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.876718   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:07.877431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.807640   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:06.821144   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:06.821203   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:06.857743   79191 cri.go:89] found id: ""
	I0816 00:36:06.857776   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.857786   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:06.857794   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:06.857872   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:06.895980   79191 cri.go:89] found id: ""
	I0816 00:36:06.896007   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.896018   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:06.896025   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:06.896090   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:06.935358   79191 cri.go:89] found id: ""
	I0816 00:36:06.935389   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.935399   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:06.935406   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:06.935461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:06.971533   79191 cri.go:89] found id: ""
	I0816 00:36:06.971561   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.971572   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:06.971580   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:06.971640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:07.007786   79191 cri.go:89] found id: ""
	I0816 00:36:07.007812   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.007823   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:07.007830   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:07.007890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:07.044060   79191 cri.go:89] found id: ""
	I0816 00:36:07.044092   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.044104   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:07.044112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:07.044185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:07.080058   79191 cri.go:89] found id: ""
	I0816 00:36:07.080085   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.080094   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:07.080101   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:07.080156   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:07.117749   79191 cri.go:89] found id: ""
	I0816 00:36:07.117773   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.117780   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:07.117787   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:07.117799   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:07.171418   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:07.171453   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:07.185520   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:07.185542   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:07.257817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:07.257872   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:07.257888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:07.339530   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:07.339576   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:09.882613   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:09.895873   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:09.895950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:09.936739   79191 cri.go:89] found id: ""
	I0816 00:36:09.936766   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.936774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:09.936780   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:09.936836   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:09.974145   79191 cri.go:89] found id: ""
	I0816 00:36:09.974168   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.974180   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:09.974186   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:09.974243   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:10.012166   79191 cri.go:89] found id: ""
	I0816 00:36:10.012196   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.012206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:10.012214   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:10.012265   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:10.051080   79191 cri.go:89] found id: ""
	I0816 00:36:10.051103   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.051111   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:10.051117   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:10.051176   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:10.088519   79191 cri.go:89] found id: ""
	I0816 00:36:10.088548   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.088559   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:10.088567   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:10.088628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:10.123718   79191 cri.go:89] found id: ""
	I0816 00:36:10.123744   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.123752   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:10.123758   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:10.123805   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:10.161900   79191 cri.go:89] found id: ""
	I0816 00:36:10.161922   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.161929   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:10.161995   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:10.162064   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:10.196380   79191 cri.go:89] found id: ""
	I0816 00:36:10.196408   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.196419   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:10.196429   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:10.196443   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:10.248276   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:10.248309   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:10.262241   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:10.262269   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:10.340562   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:10.340598   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:10.340626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:10.417547   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:10.417578   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:07.996930   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:09.997666   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.426502   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.426976   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.377172   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.877236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.962310   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:12.976278   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:12.976338   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:13.014501   79191 cri.go:89] found id: ""
	I0816 00:36:13.014523   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.014530   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:13.014536   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:13.014587   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:13.055942   79191 cri.go:89] found id: ""
	I0816 00:36:13.055970   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.055979   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:13.055987   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:13.056048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:13.090309   79191 cri.go:89] found id: ""
	I0816 00:36:13.090336   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.090346   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:13.090354   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:13.090413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:13.124839   79191 cri.go:89] found id: ""
	I0816 00:36:13.124865   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.124876   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:13.124884   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:13.124945   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:13.164535   79191 cri.go:89] found id: ""
	I0816 00:36:13.164560   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.164567   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:13.164573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:13.164630   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:13.198651   79191 cri.go:89] found id: ""
	I0816 00:36:13.198699   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.198710   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:13.198718   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:13.198785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:13.233255   79191 cri.go:89] found id: ""
	I0816 00:36:13.233278   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.233286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:13.233292   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:13.233348   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:13.267327   79191 cri.go:89] found id: ""
	I0816 00:36:13.267351   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.267359   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:13.267367   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:13.267384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:13.352053   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:13.352089   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:13.393438   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:13.393471   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:13.445397   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:13.445430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:13.459143   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:13.459177   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:13.530160   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.031296   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:16.045557   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:16.045618   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:16.081828   79191 cri.go:89] found id: ""
	I0816 00:36:16.081871   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.081882   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:16.081890   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:16.081949   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:16.116228   79191 cri.go:89] found id: ""
	I0816 00:36:16.116254   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.116264   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:16.116272   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:16.116334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:16.150051   79191 cri.go:89] found id: ""
	I0816 00:36:16.150079   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.150087   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:16.150093   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:16.150139   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:16.186218   79191 cri.go:89] found id: ""
	I0816 00:36:16.186241   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.186248   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:16.186254   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:16.186301   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:16.223223   79191 cri.go:89] found id: ""
	I0816 00:36:16.223255   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.223263   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:16.223270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:16.223316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:16.259929   79191 cri.go:89] found id: ""
	I0816 00:36:16.259953   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.259960   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:16.259970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:16.260099   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:16.294611   79191 cri.go:89] found id: ""
	I0816 00:36:16.294633   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.294641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:16.294649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:16.294725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:16.333492   79191 cri.go:89] found id: ""
	I0816 00:36:16.333523   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.333533   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:16.333544   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:16.333563   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:16.385970   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:16.386002   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:16.400359   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:16.400384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:16.471363   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.471388   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:16.471408   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:16.555990   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:16.556022   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:12.495406   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.995145   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.926160   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.426768   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:15.376672   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.876395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.876542   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.099502   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:19.112649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:19.112706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:19.145809   79191 cri.go:89] found id: ""
	I0816 00:36:19.145837   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.145858   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:19.145865   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:19.145928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:19.183737   79191 cri.go:89] found id: ""
	I0816 00:36:19.183763   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.183774   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:19.183781   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:19.183841   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:19.219729   79191 cri.go:89] found id: ""
	I0816 00:36:19.219756   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.219764   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:19.219770   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:19.219815   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:19.254450   79191 cri.go:89] found id: ""
	I0816 00:36:19.254474   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.254481   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:19.254488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:19.254540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:19.289543   79191 cri.go:89] found id: ""
	I0816 00:36:19.289573   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.289585   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:19.289592   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:19.289651   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:19.330727   79191 cri.go:89] found id: ""
	I0816 00:36:19.330748   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.330756   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:19.330762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:19.330809   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:19.368952   79191 cri.go:89] found id: ""
	I0816 00:36:19.368978   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.368986   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:19.368992   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:19.369048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:19.406211   79191 cri.go:89] found id: ""
	I0816 00:36:19.406247   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.406258   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:19.406268   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:19.406282   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:19.457996   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:19.458032   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:19.472247   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:19.472274   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:19.542840   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:19.542862   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:19.542876   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:19.624478   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:19.624520   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:16.997148   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.496434   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.427251   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:21.925550   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.925858   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.376318   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:24.376431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.165884   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:22.180005   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:22.180078   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:22.217434   79191 cri.go:89] found id: ""
	I0816 00:36:22.217463   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.217471   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:22.217478   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:22.217534   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:22.250679   79191 cri.go:89] found id: ""
	I0816 00:36:22.250708   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.250717   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:22.250725   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:22.250785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:22.284294   79191 cri.go:89] found id: ""
	I0816 00:36:22.284324   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.284334   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:22.284341   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:22.284403   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:22.320747   79191 cri.go:89] found id: ""
	I0816 00:36:22.320779   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.320790   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:22.320799   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:22.320858   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:22.355763   79191 cri.go:89] found id: ""
	I0816 00:36:22.355793   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.355803   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:22.355811   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:22.355871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:22.392762   79191 cri.go:89] found id: ""
	I0816 00:36:22.392788   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.392796   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:22.392802   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:22.392860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:22.426577   79191 cri.go:89] found id: ""
	I0816 00:36:22.426605   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.426614   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:22.426621   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:22.426682   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:22.459989   79191 cri.go:89] found id: ""
	I0816 00:36:22.460018   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.460030   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:22.460040   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:22.460054   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:22.545782   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:22.545820   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:22.587404   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:22.587431   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:22.638519   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:22.638559   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:22.653064   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:22.653087   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:22.734333   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.234823   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:25.248716   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:25.248787   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:25.284760   79191 cri.go:89] found id: ""
	I0816 00:36:25.284786   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.284793   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:25.284799   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:25.284870   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:25.325523   79191 cri.go:89] found id: ""
	I0816 00:36:25.325548   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.325556   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:25.325562   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:25.325621   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:25.365050   79191 cri.go:89] found id: ""
	I0816 00:36:25.365078   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.365088   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:25.365096   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:25.365155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:25.405005   79191 cri.go:89] found id: ""
	I0816 00:36:25.405038   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.405049   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:25.405062   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:25.405121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:25.444622   79191 cri.go:89] found id: ""
	I0816 00:36:25.444648   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.444656   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:25.444662   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:25.444710   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:25.485364   79191 cri.go:89] found id: ""
	I0816 00:36:25.485394   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.485404   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:25.485413   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:25.485492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:25.521444   79191 cri.go:89] found id: ""
	I0816 00:36:25.521471   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.521482   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:25.521490   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:25.521550   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:25.556763   79191 cri.go:89] found id: ""
	I0816 00:36:25.556789   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.556796   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:25.556805   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:25.556817   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:25.606725   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:25.606759   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:25.623080   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:25.623108   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:25.705238   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.705258   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:25.705280   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:25.782188   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:25.782224   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:21.994519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.995061   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.494442   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:25.926835   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.427012   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.876206   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.876563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.325018   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:28.337778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:28.337860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:28.378452   79191 cri.go:89] found id: ""
	I0816 00:36:28.378482   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.378492   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:28.378499   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:28.378556   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:28.412103   79191 cri.go:89] found id: ""
	I0816 00:36:28.412132   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.412143   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:28.412150   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:28.412214   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:28.447363   79191 cri.go:89] found id: ""
	I0816 00:36:28.447388   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.447396   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:28.447401   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:28.447452   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:28.481199   79191 cri.go:89] found id: ""
	I0816 00:36:28.481228   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.481242   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:28.481251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:28.481305   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:28.517523   79191 cri.go:89] found id: ""
	I0816 00:36:28.517545   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.517552   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:28.517558   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:28.517620   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:28.552069   79191 cri.go:89] found id: ""
	I0816 00:36:28.552101   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.552112   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:28.552120   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:28.552193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:28.594124   79191 cri.go:89] found id: ""
	I0816 00:36:28.594148   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.594158   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:28.594166   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:28.594228   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:28.631451   79191 cri.go:89] found id: ""
	I0816 00:36:28.631472   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.631480   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:28.631488   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:28.631498   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:28.685335   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:28.685368   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:28.700852   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:28.700877   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:28.773932   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:28.773957   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:28.773972   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:28.848951   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:28.848989   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:31.389208   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:31.403731   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:31.403813   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:31.440979   79191 cri.go:89] found id: ""
	I0816 00:36:31.441010   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.441020   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:31.441028   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:31.441092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:31.476435   79191 cri.go:89] found id: ""
	I0816 00:36:31.476458   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.476465   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:31.476471   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:31.476530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:31.514622   79191 cri.go:89] found id: ""
	I0816 00:36:31.514644   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.514651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:31.514657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:31.514715   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:31.554503   79191 cri.go:89] found id: ""
	I0816 00:36:31.554533   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.554543   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:31.554551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:31.554609   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:31.590283   79191 cri.go:89] found id: ""
	I0816 00:36:31.590317   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.590325   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:31.590332   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:31.590380   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:31.625969   79191 cri.go:89] found id: ""
	I0816 00:36:31.626003   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.626014   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:31.626031   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:31.626102   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:31.660489   79191 cri.go:89] found id: ""
	I0816 00:36:31.660513   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.660520   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:31.660526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:31.660583   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:31.694728   79191 cri.go:89] found id: ""
	I0816 00:36:31.694761   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.694769   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:31.694779   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:31.694790   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:31.760631   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:31.760663   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:31.774858   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:31.774886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:36:28.994228   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.994276   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.926313   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.426045   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.877175   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.378602   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:36:31.851125   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:31.851145   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:31.851156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:31.934491   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:31.934521   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:34.476368   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:34.489252   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:34.489308   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:34.524932   79191 cri.go:89] found id: ""
	I0816 00:36:34.524964   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.524972   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:34.524977   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:34.525032   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:34.559434   79191 cri.go:89] found id: ""
	I0816 00:36:34.559462   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.559473   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:34.559481   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:34.559543   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:34.598700   79191 cri.go:89] found id: ""
	I0816 00:36:34.598728   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.598739   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:34.598747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:34.598808   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:34.632413   79191 cri.go:89] found id: ""
	I0816 00:36:34.632438   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.632448   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:34.632456   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:34.632514   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:34.668385   79191 cri.go:89] found id: ""
	I0816 00:36:34.668409   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.668418   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:34.668425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:34.668486   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:34.703728   79191 cri.go:89] found id: ""
	I0816 00:36:34.703754   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.703764   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:34.703772   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:34.703832   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:34.743119   79191 cri.go:89] found id: ""
	I0816 00:36:34.743152   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.743161   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:34.743171   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:34.743230   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:34.778932   79191 cri.go:89] found id: ""
	I0816 00:36:34.778955   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.778963   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:34.778971   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:34.778987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:34.832050   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:34.832084   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:34.845700   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:34.845728   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:34.917535   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:34.917554   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:34.917565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:35.005262   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:35.005295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:32.994435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:34.994503   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.926422   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.876400   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:38.376351   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.547107   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:37.562035   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:37.562095   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:37.605992   79191 cri.go:89] found id: ""
	I0816 00:36:37.606021   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.606028   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:37.606035   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:37.606092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:37.642613   79191 cri.go:89] found id: ""
	I0816 00:36:37.642642   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.642653   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:37.642660   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:37.642708   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:37.677810   79191 cri.go:89] found id: ""
	I0816 00:36:37.677863   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.677875   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:37.677883   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:37.677939   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:37.714490   79191 cri.go:89] found id: ""
	I0816 00:36:37.714514   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.714522   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:37.714529   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:37.714575   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:37.750807   79191 cri.go:89] found id: ""
	I0816 00:36:37.750837   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.750844   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:37.750850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:37.750912   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:37.790307   79191 cri.go:89] found id: ""
	I0816 00:36:37.790337   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.790347   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:37.790355   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:37.790404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:37.826811   79191 cri.go:89] found id: ""
	I0816 00:36:37.826838   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.826848   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:37.826856   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:37.826920   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:37.862066   79191 cri.go:89] found id: ""
	I0816 00:36:37.862091   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.862101   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:37.862112   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:37.862127   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:37.917127   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:37.917161   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:37.932986   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:37.933024   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:38.008715   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:38.008739   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:38.008754   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:38.088744   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:38.088778   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:40.643426   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:40.659064   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:40.659128   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:40.702486   79191 cri.go:89] found id: ""
	I0816 00:36:40.702513   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.702523   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:40.702530   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:40.702595   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:40.736016   79191 cri.go:89] found id: ""
	I0816 00:36:40.736044   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.736057   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:40.736064   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:40.736125   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:40.779665   79191 cri.go:89] found id: ""
	I0816 00:36:40.779704   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.779724   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:40.779733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:40.779795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:40.818612   79191 cri.go:89] found id: ""
	I0816 00:36:40.818633   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.818640   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:40.818647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:40.818695   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:40.855990   79191 cri.go:89] found id: ""
	I0816 00:36:40.856014   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.856021   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:40.856027   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:40.856074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:40.894792   79191 cri.go:89] found id: ""
	I0816 00:36:40.894827   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.894836   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:40.894845   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:40.894894   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:40.932233   79191 cri.go:89] found id: ""
	I0816 00:36:40.932255   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.932263   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:40.932268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:40.932324   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:40.974601   79191 cri.go:89] found id: ""
	I0816 00:36:40.974624   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.974633   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:40.974642   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:40.974660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:41.049185   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:41.049209   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:41.049223   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:41.129446   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:41.129481   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:41.170312   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:41.170341   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:41.226217   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:41.226254   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:36.995268   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:39.494273   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:41.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.426501   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.926122   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.877227   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.878644   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:43.741485   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:43.756248   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:43.756325   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:43.792440   79191 cri.go:89] found id: ""
	I0816 00:36:43.792469   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.792480   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:43.792488   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:43.792549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:43.829906   79191 cri.go:89] found id: ""
	I0816 00:36:43.829933   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.829941   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:43.829947   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:43.830003   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:43.880305   79191 cri.go:89] found id: ""
	I0816 00:36:43.880330   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.880337   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:43.880343   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:43.880399   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:43.937899   79191 cri.go:89] found id: ""
	I0816 00:36:43.937929   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.937939   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:43.937953   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:43.938023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:43.997578   79191 cri.go:89] found id: ""
	I0816 00:36:43.997603   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.997610   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:43.997620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:43.997672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:44.035606   79191 cri.go:89] found id: ""
	I0816 00:36:44.035629   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.035637   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:44.035643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:44.035692   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:44.072919   79191 cri.go:89] found id: ""
	I0816 00:36:44.072950   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.072961   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:44.072968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:44.073043   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:44.108629   79191 cri.go:89] found id: ""
	I0816 00:36:44.108659   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.108681   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:44.108692   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:44.108705   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:44.149127   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:44.149151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:44.201694   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:44.201737   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:44.217161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:44.217199   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:44.284335   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:44.284362   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:44.284379   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:43.996478   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.494382   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:44.926542   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.926713   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:45.376030   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:47.875418   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.877201   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.869196   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:46.883519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:46.883584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:46.924767   79191 cri.go:89] found id: ""
	I0816 00:36:46.924806   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.924821   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:46.924829   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:46.924889   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:46.963282   79191 cri.go:89] found id: ""
	I0816 00:36:46.963309   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.963320   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:46.963327   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:46.963389   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:47.001421   79191 cri.go:89] found id: ""
	I0816 00:36:47.001450   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.001458   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:47.001463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:47.001518   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:47.037679   79191 cri.go:89] found id: ""
	I0816 00:36:47.037702   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.037713   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:47.037720   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:47.037778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:47.078009   79191 cri.go:89] found id: ""
	I0816 00:36:47.078039   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.078050   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:47.078056   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:47.078113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:47.119032   79191 cri.go:89] found id: ""
	I0816 00:36:47.119056   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.119064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:47.119069   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:47.119127   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:47.154893   79191 cri.go:89] found id: ""
	I0816 00:36:47.154919   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.154925   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:47.154933   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:47.154993   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:47.194544   79191 cri.go:89] found id: ""
	I0816 00:36:47.194571   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.194582   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:47.194592   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:47.194612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:47.267148   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:47.267172   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:47.267186   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:47.345257   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:47.345295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:47.386207   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:47.386233   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:47.436171   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:47.436201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:49.949977   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:49.965702   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:49.965761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:50.002443   79191 cri.go:89] found id: ""
	I0816 00:36:50.002470   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.002481   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:50.002489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:50.002548   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:50.039123   79191 cri.go:89] found id: ""
	I0816 00:36:50.039155   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.039162   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:50.039168   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:50.039220   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:50.074487   79191 cri.go:89] found id: ""
	I0816 00:36:50.074517   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.074527   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:50.074535   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:50.074593   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:50.108980   79191 cri.go:89] found id: ""
	I0816 00:36:50.109008   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.109018   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:50.109025   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:50.109082   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:50.149182   79191 cri.go:89] found id: ""
	I0816 00:36:50.149202   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.149209   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:50.149215   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:50.149261   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:50.183066   79191 cri.go:89] found id: ""
	I0816 00:36:50.183094   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.183102   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:50.183108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:50.183165   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:50.220200   79191 cri.go:89] found id: ""
	I0816 00:36:50.220231   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.220240   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:50.220246   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:50.220302   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:50.258059   79191 cri.go:89] found id: ""
	I0816 00:36:50.258083   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.258092   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:50.258100   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:50.258110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:50.300560   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:50.300591   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:50.350548   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:50.350581   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:50.364792   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:50.364816   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:50.437723   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:50.437746   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:50.437761   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:48.995009   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:50.995542   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.425926   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:51.427896   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.926363   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:52.375826   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:54.876435   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.015846   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:53.029184   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:53.029246   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:53.064306   79191 cri.go:89] found id: ""
	I0816 00:36:53.064338   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.064346   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:53.064352   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:53.064404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:53.104425   79191 cri.go:89] found id: ""
	I0816 00:36:53.104458   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.104468   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:53.104476   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:53.104538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:53.139470   79191 cri.go:89] found id: ""
	I0816 00:36:53.139493   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.139500   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:53.139506   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:53.139551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:53.185195   79191 cri.go:89] found id: ""
	I0816 00:36:53.185225   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.185234   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:53.185242   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:53.185300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:53.221897   79191 cri.go:89] found id: ""
	I0816 00:36:53.221925   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.221935   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:53.221943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:53.222006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:53.258810   79191 cri.go:89] found id: ""
	I0816 00:36:53.258841   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.258852   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:53.258859   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:53.258924   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:53.298672   79191 cri.go:89] found id: ""
	I0816 00:36:53.298701   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.298711   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:53.298719   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:53.298778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:53.333498   79191 cri.go:89] found id: ""
	I0816 00:36:53.333520   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.333527   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:53.333535   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:53.333548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.370495   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:53.370530   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:53.423938   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:53.423982   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:53.438897   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:53.438926   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:53.505951   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:53.505973   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:53.505987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.089638   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:56.103832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:56.103893   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:56.148010   79191 cri.go:89] found id: ""
	I0816 00:36:56.148038   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.148048   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:56.148057   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:56.148120   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:56.185631   79191 cri.go:89] found id: ""
	I0816 00:36:56.185663   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.185673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:56.185680   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:56.185739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:56.222064   79191 cri.go:89] found id: ""
	I0816 00:36:56.222093   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.222104   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:56.222112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:56.222162   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:56.260462   79191 cri.go:89] found id: ""
	I0816 00:36:56.260494   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.260504   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:56.260513   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:56.260574   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:56.296125   79191 cri.go:89] found id: ""
	I0816 00:36:56.296154   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.296164   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:56.296172   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:56.296236   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:56.333278   79191 cri.go:89] found id: ""
	I0816 00:36:56.333305   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.333316   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:56.333324   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:56.333385   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:56.368924   79191 cri.go:89] found id: ""
	I0816 00:36:56.368952   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.368962   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:56.368970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:56.369034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:56.407148   79191 cri.go:89] found id: ""
	I0816 00:36:56.407180   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.407190   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:56.407201   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:56.407215   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:56.464745   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:56.464779   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:56.478177   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:56.478204   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:56.555827   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:56.555851   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:56.555864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.640001   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:56.640040   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.495546   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.994786   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:58.426865   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:57.376484   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.876765   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.181423   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:59.195722   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:59.195804   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:59.232043   79191 cri.go:89] found id: ""
	I0816 00:36:59.232067   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.232075   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:59.232081   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:59.232132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:59.270628   79191 cri.go:89] found id: ""
	I0816 00:36:59.270656   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.270673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:59.270681   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:59.270743   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:59.304054   79191 cri.go:89] found id: ""
	I0816 00:36:59.304089   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.304100   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:59.304108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:59.304169   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:59.339386   79191 cri.go:89] found id: ""
	I0816 00:36:59.339410   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.339417   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:59.339423   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:59.339483   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:59.381313   79191 cri.go:89] found id: ""
	I0816 00:36:59.381361   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.381376   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:59.381385   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:59.381449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:59.417060   79191 cri.go:89] found id: ""
	I0816 00:36:59.417090   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.417101   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:59.417109   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:59.417160   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:59.461034   79191 cri.go:89] found id: ""
	I0816 00:36:59.461060   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.461071   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:59.461078   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:59.461136   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:59.496248   79191 cri.go:89] found id: ""
	I0816 00:36:59.496276   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.496286   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:59.496297   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:59.496312   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:59.566779   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:59.566803   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:59.566829   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:59.651999   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:59.652034   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:59.693286   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:59.693310   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:59.746677   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:59.746711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:58.494370   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.494959   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.927036   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:03.425008   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.376921   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.876676   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.262527   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:02.277903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:02.277965   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:02.323846   79191 cri.go:89] found id: ""
	I0816 00:37:02.323868   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.323876   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:02.323882   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:02.323938   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:02.359552   79191 cri.go:89] found id: ""
	I0816 00:37:02.359578   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.359589   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:02.359596   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:02.359657   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:02.395062   79191 cri.go:89] found id: ""
	I0816 00:37:02.395087   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.395094   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:02.395100   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:02.395155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:02.432612   79191 cri.go:89] found id: ""
	I0816 00:37:02.432636   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.432646   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:02.432654   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:02.432712   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:02.468612   79191 cri.go:89] found id: ""
	I0816 00:37:02.468640   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.468651   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:02.468659   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:02.468716   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:02.514472   79191 cri.go:89] found id: ""
	I0816 00:37:02.514500   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.514511   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:02.514519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:02.514576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:02.551964   79191 cri.go:89] found id: ""
	I0816 00:37:02.551993   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.552003   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:02.552011   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:02.552061   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:02.588018   79191 cri.go:89] found id: ""
	I0816 00:37:02.588044   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.588053   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:02.588063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:02.588081   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:02.638836   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:02.638875   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:02.653581   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:02.653613   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:02.737018   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.737047   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:02.737065   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:02.819726   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:02.819763   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.364943   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:05.379433   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:05.379492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:05.419165   79191 cri.go:89] found id: ""
	I0816 00:37:05.419191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.419198   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:05.419204   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:05.419264   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:05.454417   79191 cri.go:89] found id: ""
	I0816 00:37:05.454438   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.454446   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:05.454452   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:05.454497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:05.490162   79191 cri.go:89] found id: ""
	I0816 00:37:05.490191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.490203   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:05.490210   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:05.490268   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:05.527303   79191 cri.go:89] found id: ""
	I0816 00:37:05.527327   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.527334   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:05.527340   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:05.527393   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:05.562271   79191 cri.go:89] found id: ""
	I0816 00:37:05.562302   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.562310   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:05.562316   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:05.562374   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:05.597800   79191 cri.go:89] found id: ""
	I0816 00:37:05.597823   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.597830   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:05.597837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:05.597905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:05.633996   79191 cri.go:89] found id: ""
	I0816 00:37:05.634021   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.634028   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:05.634034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:05.634088   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:05.672408   79191 cri.go:89] found id: ""
	I0816 00:37:05.672437   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.672446   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:05.672457   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:05.672472   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:05.750956   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:05.750995   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.795573   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:05.795603   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:05.848560   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:05.848593   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:05.862245   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:05.862268   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:05.938704   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.495728   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.994839   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:05.425507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:07.426459   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:06.877664   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.375601   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:08.439692   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:08.452850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:08.452927   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:08.490015   79191 cri.go:89] found id: ""
	I0816 00:37:08.490043   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.490053   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:08.490060   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:08.490121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:08.529631   79191 cri.go:89] found id: ""
	I0816 00:37:08.529665   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.529676   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:08.529689   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:08.529747   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:08.564858   79191 cri.go:89] found id: ""
	I0816 00:37:08.564885   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.564896   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:08.564904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:08.564966   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:08.601144   79191 cri.go:89] found id: ""
	I0816 00:37:08.601180   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.601190   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:08.601200   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:08.601257   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:08.637050   79191 cri.go:89] found id: ""
	I0816 00:37:08.637081   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.637090   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:08.637098   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:08.637158   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:08.670613   79191 cri.go:89] found id: ""
	I0816 00:37:08.670644   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.670655   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:08.670663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:08.670727   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:08.704664   79191 cri.go:89] found id: ""
	I0816 00:37:08.704690   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.704698   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:08.704704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:08.704754   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:08.741307   79191 cri.go:89] found id: ""
	I0816 00:37:08.741337   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.741348   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:08.741360   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:08.741374   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:08.755434   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:08.755459   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:08.828118   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:08.828140   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:08.828151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:08.911565   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:08.911605   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:08.954907   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:08.954937   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.508848   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:11.521998   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:11.522060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:11.558581   79191 cri.go:89] found id: ""
	I0816 00:37:11.558611   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.558622   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:11.558630   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:11.558697   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:11.593798   79191 cri.go:89] found id: ""
	I0816 00:37:11.593822   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.593830   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:11.593836   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:11.593905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:11.629619   79191 cri.go:89] found id: ""
	I0816 00:37:11.629648   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.629658   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:11.629664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:11.629717   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:11.666521   79191 cri.go:89] found id: ""
	I0816 00:37:11.666548   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.666556   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:11.666562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:11.666607   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:11.703374   79191 cri.go:89] found id: ""
	I0816 00:37:11.703406   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.703417   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:11.703427   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:11.703491   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:11.739374   79191 cri.go:89] found id: ""
	I0816 00:37:11.739403   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.739413   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:11.739420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:11.739475   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:11.774981   79191 cri.go:89] found id: ""
	I0816 00:37:11.775006   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.775013   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:11.775019   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:11.775074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:06.995675   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.495024   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:12.428179   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.377241   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.875723   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.809561   79191 cri.go:89] found id: ""
	I0816 00:37:11.809590   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.809601   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:11.809612   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:11.809626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.863071   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:11.863116   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:11.878161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:11.878191   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:11.953572   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.953594   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:11.953608   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:12.035815   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:12.035848   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:14.576547   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:14.590747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:14.590802   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:14.626732   79191 cri.go:89] found id: ""
	I0816 00:37:14.626762   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.626774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:14.626781   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:14.626833   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:14.662954   79191 cri.go:89] found id: ""
	I0816 00:37:14.662978   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.662988   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:14.662996   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:14.663057   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:14.697618   79191 cri.go:89] found id: ""
	I0816 00:37:14.697646   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.697656   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:14.697663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:14.697725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:14.735137   79191 cri.go:89] found id: ""
	I0816 00:37:14.735161   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.735168   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:14.735174   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:14.735222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:14.770625   79191 cri.go:89] found id: ""
	I0816 00:37:14.770648   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.770655   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:14.770660   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:14.770718   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:14.808678   79191 cri.go:89] found id: ""
	I0816 00:37:14.808708   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.808718   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:14.808726   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:14.808795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:14.847321   79191 cri.go:89] found id: ""
	I0816 00:37:14.847349   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.847360   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:14.847368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:14.847425   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:14.886110   79191 cri.go:89] found id: ""
	I0816 00:37:14.886136   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.886147   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:14.886156   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:14.886175   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:14.971978   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:14.972013   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:15.015620   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:15.015644   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:15.067372   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:15.067405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:15.081629   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:15.081652   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:15.151580   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.995551   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.995831   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.495016   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:14.926297   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.926367   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:18.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:15.876514   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.877987   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.652362   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:17.666201   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:17.666278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:17.698723   79191 cri.go:89] found id: ""
	I0816 00:37:17.698760   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.698772   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:17.698778   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:17.698827   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:17.732854   79191 cri.go:89] found id: ""
	I0816 00:37:17.732883   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.732893   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:17.732901   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:17.732957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:17.767665   79191 cri.go:89] found id: ""
	I0816 00:37:17.767691   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.767701   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:17.767709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:17.767769   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:17.801490   79191 cri.go:89] found id: ""
	I0816 00:37:17.801512   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.801520   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:17.801526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:17.801579   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:17.837451   79191 cri.go:89] found id: ""
	I0816 00:37:17.837479   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.837490   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:17.837498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:17.837562   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:17.872898   79191 cri.go:89] found id: ""
	I0816 00:37:17.872924   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.872934   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:17.872943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:17.873002   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:17.910325   79191 cri.go:89] found id: ""
	I0816 00:37:17.910352   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.910362   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:17.910370   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:17.910431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:17.946885   79191 cri.go:89] found id: ""
	I0816 00:37:17.946909   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.946916   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:17.946923   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:17.946935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:18.014011   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:18.014045   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:18.028850   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:18.028886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:18.099362   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:18.099381   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:18.099396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:18.180552   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:18.180588   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:20.720810   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:20.733806   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:20.733887   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:20.771300   79191 cri.go:89] found id: ""
	I0816 00:37:20.771323   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.771330   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:20.771336   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:20.771394   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:20.812327   79191 cri.go:89] found id: ""
	I0816 00:37:20.812355   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.812362   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:20.812369   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:20.812430   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:20.846830   79191 cri.go:89] found id: ""
	I0816 00:37:20.846861   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.846872   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:20.846879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:20.846948   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:20.889979   79191 cri.go:89] found id: ""
	I0816 00:37:20.890005   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.890015   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:20.890023   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:20.890086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:20.933732   79191 cri.go:89] found id: ""
	I0816 00:37:20.933762   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.933772   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:20.933778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:20.933824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:20.972341   79191 cri.go:89] found id: ""
	I0816 00:37:20.972368   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.972376   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:20.972382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:20.972444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:21.011179   79191 cri.go:89] found id: ""
	I0816 00:37:21.011207   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.011216   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:21.011224   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:21.011282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:21.045645   79191 cri.go:89] found id: ""
	I0816 00:37:21.045668   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.045675   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:21.045684   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:21.045694   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:21.099289   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:21.099321   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:21.113814   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:21.113858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:21.186314   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:21.186337   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:21.186355   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:21.271116   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:21.271152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:18.994476   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.996435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:21.425187   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.425456   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.377999   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:22.877014   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.818598   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:23.832330   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:23.832387   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:23.869258   79191 cri.go:89] found id: ""
	I0816 00:37:23.869279   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.869286   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:23.869293   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:23.869342   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:23.903958   79191 cri.go:89] found id: ""
	I0816 00:37:23.903989   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.903999   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:23.904006   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:23.904060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:23.943110   79191 cri.go:89] found id: ""
	I0816 00:37:23.943142   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.943153   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:23.943160   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:23.943222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:23.979325   79191 cri.go:89] found id: ""
	I0816 00:37:23.979356   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.979366   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:23.979374   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:23.979435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:24.017570   79191 cri.go:89] found id: ""
	I0816 00:37:24.017597   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.017607   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:24.017614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:24.017684   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:24.051522   79191 cri.go:89] found id: ""
	I0816 00:37:24.051546   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.051555   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:24.051562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:24.051626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:24.087536   79191 cri.go:89] found id: ""
	I0816 00:37:24.087561   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.087572   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:24.087579   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:24.087644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:24.123203   79191 cri.go:89] found id: ""
	I0816 00:37:24.123233   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.123245   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:24.123256   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:24.123276   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:24.178185   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:24.178225   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:24.192895   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:24.192920   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:24.273471   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:24.273492   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:24.273504   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:24.357890   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:24.357936   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:23.495269   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.994859   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.427328   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.927068   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.376932   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.377168   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:29.876182   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:26.950399   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:26.964347   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:26.964406   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:27.004694   79191 cri.go:89] found id: ""
	I0816 00:37:27.004722   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.004738   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:27.004745   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:27.004800   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:27.040051   79191 cri.go:89] found id: ""
	I0816 00:37:27.040080   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.040090   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:27.040096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:27.040144   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:27.088614   79191 cri.go:89] found id: ""
	I0816 00:37:27.088642   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.088651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:27.088657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:27.088732   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:27.125427   79191 cri.go:89] found id: ""
	I0816 00:37:27.125450   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.125457   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:27.125464   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:27.125511   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:27.158562   79191 cri.go:89] found id: ""
	I0816 00:37:27.158592   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.158602   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:27.158609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:27.158672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:27.192986   79191 cri.go:89] found id: ""
	I0816 00:37:27.193015   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.193026   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:27.193034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:27.193091   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:27.228786   79191 cri.go:89] found id: ""
	I0816 00:37:27.228828   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.228847   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:27.228858   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:27.228921   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:27.262776   79191 cri.go:89] found id: ""
	I0816 00:37:27.262808   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.262819   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:27.262829   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:27.262844   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:27.276444   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:27.276470   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:27.349918   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:27.349946   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:27.349958   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:27.435030   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:27.435061   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:27.484043   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:27.484069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.038376   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:30.051467   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:30.051530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:30.086346   79191 cri.go:89] found id: ""
	I0816 00:37:30.086376   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.086386   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:30.086394   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:30.086454   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:30.127665   79191 cri.go:89] found id: ""
	I0816 00:37:30.127691   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.127699   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:30.127704   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:30.127757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:30.169901   79191 cri.go:89] found id: ""
	I0816 00:37:30.169929   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.169939   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:30.169950   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:30.170013   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:30.212501   79191 cri.go:89] found id: ""
	I0816 00:37:30.212523   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.212530   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:30.212537   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:30.212584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:30.256560   79191 cri.go:89] found id: ""
	I0816 00:37:30.256583   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.256591   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:30.256597   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:30.256646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:30.291062   79191 cri.go:89] found id: ""
	I0816 00:37:30.291086   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.291093   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:30.291099   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:30.291143   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:30.328325   79191 cri.go:89] found id: ""
	I0816 00:37:30.328353   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.328361   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:30.328368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:30.328415   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:30.364946   79191 cri.go:89] found id: ""
	I0816 00:37:30.364972   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.364981   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:30.364991   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:30.365005   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:30.408090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:30.408117   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.463421   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:30.463456   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:30.479679   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:30.479711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:30.555394   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:30.555416   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:30.555432   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:28.494477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.494598   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.427146   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:32.926282   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:31.877446   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.376145   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:33.137366   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:33.150970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:33.151030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:33.191020   79191 cri.go:89] found id: ""
	I0816 00:37:33.191047   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.191055   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:33.191061   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:33.191112   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:33.227971   79191 cri.go:89] found id: ""
	I0816 00:37:33.228022   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.228030   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:33.228038   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:33.228089   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:33.265036   79191 cri.go:89] found id: ""
	I0816 00:37:33.265065   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.265074   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:33.265079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:33.265126   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:33.300385   79191 cri.go:89] found id: ""
	I0816 00:37:33.300411   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.300418   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:33.300425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:33.300487   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:33.335727   79191 cri.go:89] found id: ""
	I0816 00:37:33.335757   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.335768   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:33.335776   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:33.335839   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:33.373458   79191 cri.go:89] found id: ""
	I0816 00:37:33.373489   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.373500   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:33.373507   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:33.373568   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:33.410380   79191 cri.go:89] found id: ""
	I0816 00:37:33.410404   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.410413   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:33.410420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:33.410480   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:33.451007   79191 cri.go:89] found id: ""
	I0816 00:37:33.451030   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.451040   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:33.451049   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:33.451062   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:33.502215   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:33.502249   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:33.516123   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:33.516152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:33.590898   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:33.590921   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:33.590944   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:33.668404   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:33.668455   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:36.209671   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:36.223498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:36.223561   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:36.258980   79191 cri.go:89] found id: ""
	I0816 00:37:36.259041   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.259056   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:36.259064   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:36.259123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:36.293659   79191 cri.go:89] found id: ""
	I0816 00:37:36.293687   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.293694   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:36.293703   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:36.293761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:36.331729   79191 cri.go:89] found id: ""
	I0816 00:37:36.331756   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.331766   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:36.331773   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:36.331830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:36.368441   79191 cri.go:89] found id: ""
	I0816 00:37:36.368470   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.368479   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:36.368486   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:36.368533   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:36.405338   79191 cri.go:89] found id: ""
	I0816 00:37:36.405368   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.405380   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:36.405389   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:36.405448   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:36.441986   79191 cri.go:89] found id: ""
	I0816 00:37:36.442018   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.442029   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:36.442038   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:36.442097   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:36.478102   79191 cri.go:89] found id: ""
	I0816 00:37:36.478183   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.478197   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:36.478206   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:36.478269   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:36.517138   79191 cri.go:89] found id: ""
	I0816 00:37:36.517167   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.517178   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:36.517190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:36.517205   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:36.570009   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:36.570042   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:36.583534   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:36.583565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:36.651765   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:36.651794   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:36.651808   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:36.732836   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:36.732870   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:32.495090   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.996253   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.926615   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:37.425790   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:36.377305   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:38.876443   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.274490   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:39.288528   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:39.288591   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:39.325560   79191 cri.go:89] found id: ""
	I0816 00:37:39.325582   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.325589   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:39.325599   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:39.325656   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:39.365795   79191 cri.go:89] found id: ""
	I0816 00:37:39.365822   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.365829   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:39.365837   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:39.365906   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:39.404933   79191 cri.go:89] found id: ""
	I0816 00:37:39.404961   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.404971   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:39.404977   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:39.405041   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:39.442712   79191 cri.go:89] found id: ""
	I0816 00:37:39.442736   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.442747   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:39.442754   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:39.442814   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:39.484533   79191 cri.go:89] found id: ""
	I0816 00:37:39.484557   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.484566   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:39.484573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:39.484636   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:39.522089   79191 cri.go:89] found id: ""
	I0816 00:37:39.522115   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.522125   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:39.522133   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:39.522194   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:39.557099   79191 cri.go:89] found id: ""
	I0816 00:37:39.557128   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.557138   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:39.557145   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:39.557205   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:39.594809   79191 cri.go:89] found id: ""
	I0816 00:37:39.594838   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.594849   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:39.594859   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:39.594874   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:39.611079   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:39.611110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:39.683156   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:39.683182   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:39.683198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:39.761198   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:39.761235   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:39.800972   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:39.801003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:37.494553   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.495854   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.427910   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.926445   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.376128   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:43.377791   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:42.354816   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:42.368610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:42.368673   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:42.404716   79191 cri.go:89] found id: ""
	I0816 00:37:42.404738   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.404745   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:42.404753   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:42.404798   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:42.441619   79191 cri.go:89] found id: ""
	I0816 00:37:42.441649   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.441660   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:42.441667   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:42.441726   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:42.480928   79191 cri.go:89] found id: ""
	I0816 00:37:42.480965   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.480976   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:42.480983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:42.481051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:42.519187   79191 cri.go:89] found id: ""
	I0816 00:37:42.519216   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.519226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:42.519234   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:42.519292   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:42.554928   79191 cri.go:89] found id: ""
	I0816 00:37:42.554956   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.554967   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:42.554974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:42.555035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:42.593436   79191 cri.go:89] found id: ""
	I0816 00:37:42.593472   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.593481   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:42.593487   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:42.593545   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:42.628078   79191 cri.go:89] found id: ""
	I0816 00:37:42.628101   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.628108   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:42.628113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:42.628172   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:42.662824   79191 cri.go:89] found id: ""
	I0816 00:37:42.662852   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.662862   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:42.662871   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:42.662888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:42.677267   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:42.677290   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:42.749570   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:42.749599   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:42.749615   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:42.831177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:42.831213   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:42.871928   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:42.871957   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.430704   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:45.444400   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:45.444461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:45.479503   79191 cri.go:89] found id: ""
	I0816 00:37:45.479529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.479537   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:45.479543   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:45.479596   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:45.518877   79191 cri.go:89] found id: ""
	I0816 00:37:45.518907   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.518917   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:45.518925   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:45.518992   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:45.553936   79191 cri.go:89] found id: ""
	I0816 00:37:45.553966   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.553977   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:45.553984   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:45.554035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:45.593054   79191 cri.go:89] found id: ""
	I0816 00:37:45.593081   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.593088   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:45.593095   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:45.593147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:45.631503   79191 cri.go:89] found id: ""
	I0816 00:37:45.631529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.631537   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:45.631543   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:45.631599   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:45.667435   79191 cri.go:89] found id: ""
	I0816 00:37:45.667459   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.667466   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:45.667473   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:45.667529   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:45.702140   79191 cri.go:89] found id: ""
	I0816 00:37:45.702168   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.702179   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:45.702187   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:45.702250   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:45.736015   79191 cri.go:89] found id: ""
	I0816 00:37:45.736048   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.736059   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:45.736070   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:45.736085   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:45.817392   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:45.817427   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:45.856421   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:45.856451   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.912429   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:45.912476   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:45.928411   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:45.928435   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:46.001141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:41.995835   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.497033   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.426414   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:46.927720   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:45.876721   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:47.877185   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.877396   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.501317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:48.515114   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:48.515190   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:48.553776   79191 cri.go:89] found id: ""
	I0816 00:37:48.553802   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.553810   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:48.553816   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:48.553890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:48.589760   79191 cri.go:89] found id: ""
	I0816 00:37:48.589786   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.589794   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:48.589800   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:48.589871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:48.629792   79191 cri.go:89] found id: ""
	I0816 00:37:48.629816   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.629825   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:48.629833   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:48.629898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:48.668824   79191 cri.go:89] found id: ""
	I0816 00:37:48.668852   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.668860   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:48.668866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:48.668930   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:48.704584   79191 cri.go:89] found id: ""
	I0816 00:37:48.704615   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.704626   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:48.704634   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:48.704691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:48.738833   79191 cri.go:89] found id: ""
	I0816 00:37:48.738855   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.738863   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:48.738868   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:48.738928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:48.774943   79191 cri.go:89] found id: ""
	I0816 00:37:48.774972   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.774981   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:48.774989   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:48.775051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:48.808802   79191 cri.go:89] found id: ""
	I0816 00:37:48.808825   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.808832   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:48.808841   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:48.808856   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:48.858849   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:48.858880   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:48.873338   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:48.873369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:48.950172   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:48.950195   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:48.950209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:49.038642   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:49.038679   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:51.581947   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:51.596612   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:51.596691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:51.631468   79191 cri.go:89] found id: ""
	I0816 00:37:51.631498   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.631509   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:51.631517   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:51.631577   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:51.666922   79191 cri.go:89] found id: ""
	I0816 00:37:51.666953   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.666963   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:51.666971   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:51.667034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:51.707081   79191 cri.go:89] found id: ""
	I0816 00:37:51.707109   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.707116   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:51.707122   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:51.707189   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:51.743884   79191 cri.go:89] found id: ""
	I0816 00:37:51.743912   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.743925   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:51.743932   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:51.743990   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:51.779565   79191 cri.go:89] found id: ""
	I0816 00:37:51.779595   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.779603   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:51.779610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:51.779658   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:46.994211   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.995446   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.495519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.426703   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.426947   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:53.427050   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:52.377050   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.877759   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.818800   79191 cri.go:89] found id: ""
	I0816 00:37:51.818824   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.818831   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:51.818837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:51.818899   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:51.855343   79191 cri.go:89] found id: ""
	I0816 00:37:51.855367   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.855374   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:51.855380   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:51.855426   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:51.890463   79191 cri.go:89] found id: ""
	I0816 00:37:51.890496   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.890505   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:51.890513   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:51.890526   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:51.977168   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:51.977209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:52.021626   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:52.021660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:52.076983   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:52.077027   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:52.092111   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:52.092142   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:52.172738   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:54.673192   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:54.688780   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.688853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.725279   79191 cri.go:89] found id: ""
	I0816 00:37:54.725308   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.725318   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:54.725325   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.725383   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:54.764326   79191 cri.go:89] found id: ""
	I0816 00:37:54.764353   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.764364   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:54.764372   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:54.764423   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:54.805221   79191 cri.go:89] found id: ""
	I0816 00:37:54.805252   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.805263   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:54.805270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:54.805334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:54.849724   79191 cri.go:89] found id: ""
	I0816 00:37:54.849750   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.849759   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:54.849765   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:54.849824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:54.894438   79191 cri.go:89] found id: ""
	I0816 00:37:54.894460   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.894468   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:54.894475   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:54.894532   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:54.933400   79191 cri.go:89] found id: ""
	I0816 00:37:54.933422   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.933431   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:54.933439   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:54.933497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:54.982249   79191 cri.go:89] found id: ""
	I0816 00:37:54.982277   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.982286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:54.982294   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:54.982353   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:55.024431   79191 cri.go:89] found id: ""
	I0816 00:37:55.024458   79191 logs.go:276] 0 containers: []
	W0816 00:37:55.024469   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:55.024479   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.024499   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:55.107089   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:55.107119   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:55.148949   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.148981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.202865   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.202902   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.218528   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.218556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:55.304995   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:53.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:55.995483   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.926671   78713 pod_ready.go:82] duration metric: took 4m0.007058537s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:37:54.926700   78713 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:37:54.926711   78713 pod_ready.go:39] duration metric: took 4m7.919515966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:37:54.926728   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:37:54.926764   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.926821   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.983024   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:54.983043   78713 cri.go:89] found id: ""
	I0816 00:37:54.983052   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:54.983103   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:54.988579   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.988644   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:55.035200   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.035231   78713 cri.go:89] found id: ""
	I0816 00:37:55.035241   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:55.035291   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.040701   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:55.040777   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:55.087306   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.087330   78713 cri.go:89] found id: ""
	I0816 00:37:55.087340   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:55.087422   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.092492   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:55.092560   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:55.144398   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.144424   78713 cri.go:89] found id: ""
	I0816 00:37:55.144433   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:55.144494   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.149882   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:55.149953   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:55.193442   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.193464   78713 cri.go:89] found id: ""
	I0816 00:37:55.193472   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:55.193528   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.198812   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:55.198886   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:55.238634   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.238656   78713 cri.go:89] found id: ""
	I0816 00:37:55.238666   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:55.238729   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.243141   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:55.243229   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:55.281414   78713 cri.go:89] found id: ""
	I0816 00:37:55.281439   78713 logs.go:276] 0 containers: []
	W0816 00:37:55.281449   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:55.281457   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:55.281519   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:55.319336   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.319357   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.319363   78713 cri.go:89] found id: ""
	I0816 00:37:55.319371   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:55.319431   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.323837   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.328777   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:55.328801   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.376259   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:55.376290   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.419553   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:55.419584   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.476026   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.476058   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.544263   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.544297   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.561818   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.561858   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:55.701342   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:55.701375   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:55.746935   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:55.746968   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.787200   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:55.787234   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.825257   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:55.825282   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.865569   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:55.865594   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.905234   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.905269   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:56.391175   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:56.391208   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:58.943163   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:58.961551   78713 api_server.go:72] duration metric: took 4m17.689832084s to wait for apiserver process to appear ...
	I0816 00:37:58.961592   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:37:58.961630   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:58.961697   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:59.001773   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.001794   78713 cri.go:89] found id: ""
	I0816 00:37:59.001803   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:59.001876   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.006168   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:59.006222   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:59.041625   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.041647   78713 cri.go:89] found id: ""
	I0816 00:37:59.041654   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:59.041715   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.046258   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:59.046323   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:59.086070   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.086089   78713 cri.go:89] found id: ""
	I0816 00:37:59.086097   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:59.086151   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.090556   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:59.090626   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:59.129889   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.129931   78713 cri.go:89] found id: ""
	I0816 00:37:59.129942   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:59.130008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.135694   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:59.135775   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.375656   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.375979   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:57.805335   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:57.819904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:57.819989   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:57.856119   79191 cri.go:89] found id: ""
	I0816 00:37:57.856146   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.856153   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:57.856160   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:57.856217   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:57.892797   79191 cri.go:89] found id: ""
	I0816 00:37:57.892825   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.892833   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:57.892841   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:57.892905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:57.928753   79191 cri.go:89] found id: ""
	I0816 00:37:57.928784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.928795   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:57.928803   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:57.928884   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:57.963432   79191 cri.go:89] found id: ""
	I0816 00:37:57.963462   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.963474   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:57.963481   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:57.963538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.998759   79191 cri.go:89] found id: ""
	I0816 00:37:57.998784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.998793   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:57.998801   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:57.998886   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:58.035262   79191 cri.go:89] found id: ""
	I0816 00:37:58.035288   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.035296   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:58.035303   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:58.035358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:58.071052   79191 cri.go:89] found id: ""
	I0816 00:37:58.071079   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.071087   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:58.071092   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:58.071150   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:58.110047   79191 cri.go:89] found id: ""
	I0816 00:37:58.110074   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.110083   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:58.110090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:58.110101   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:58.164792   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:58.164823   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:58.178742   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:58.178770   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:58.251861   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:58.251899   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:58.251921   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:58.329805   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:58.329859   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:00.872911   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:00.887914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:00.887986   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:00.925562   79191 cri.go:89] found id: ""
	I0816 00:38:00.925595   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.925606   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:00.925615   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:00.925669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:00.961476   79191 cri.go:89] found id: ""
	I0816 00:38:00.961498   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.961505   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:00.961510   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:00.961554   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:00.997575   79191 cri.go:89] found id: ""
	I0816 00:38:00.997599   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.997608   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:00.997616   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:00.997677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:01.035130   79191 cri.go:89] found id: ""
	I0816 00:38:01.035158   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.035169   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:01.035177   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:01.035232   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:01.073768   79191 cri.go:89] found id: ""
	I0816 00:38:01.073800   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.073811   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:01.073819   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:01.073898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:01.107904   79191 cri.go:89] found id: ""
	I0816 00:38:01.107928   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.107937   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:01.107943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:01.108004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:01.142654   79191 cri.go:89] found id: ""
	I0816 00:38:01.142690   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.142701   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:01.142709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:01.142766   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:01.187565   79191 cri.go:89] found id: ""
	I0816 00:38:01.187599   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.187610   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:01.187621   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:01.187635   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:01.265462   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:01.265493   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:01.265508   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:01.346988   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:01.347020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:01.390977   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:01.391006   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.443858   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:01.443892   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:57.996188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:00.495210   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.176702   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.176728   78713 cri.go:89] found id: ""
	I0816 00:37:59.176738   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:59.176799   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.182305   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:59.182387   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:59.223938   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.223960   78713 cri.go:89] found id: ""
	I0816 00:37:59.223968   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:59.224023   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.228818   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:59.228884   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:59.264566   78713 cri.go:89] found id: ""
	I0816 00:37:59.264589   78713 logs.go:276] 0 containers: []
	W0816 00:37:59.264597   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:59.264606   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:59.264654   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:59.302534   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.302560   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.302565   78713 cri.go:89] found id: ""
	I0816 00:37:59.302574   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:59.302621   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.307021   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.311258   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:59.311299   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:59.425542   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:59.425574   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.466078   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:59.466107   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:59.480894   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:59.480925   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.524790   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:59.524822   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.568832   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:59.568862   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.619399   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:59.619433   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.658616   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:59.658645   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.720421   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:59.720469   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.756558   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:59.756586   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.798650   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:59.798674   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:59.864280   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:59.864323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:59.913086   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:59.913118   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:02.828194   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:38:02.832896   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:38:02.834035   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:02.834059   78713 api_server.go:131] duration metric: took 3.87246001s to wait for apiserver health ...
	I0816 00:38:02.834067   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:02.834089   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:02.834145   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:02.873489   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:02.873512   78713 cri.go:89] found id: ""
	I0816 00:38:02.873521   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:38:02.873577   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.878807   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:02.878883   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:02.919930   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:02.919949   78713 cri.go:89] found id: ""
	I0816 00:38:02.919957   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:38:02.920008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.924459   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:02.924525   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:02.964609   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:02.964636   78713 cri.go:89] found id: ""
	I0816 00:38:02.964644   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:38:02.964697   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.968808   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:02.968921   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:03.017177   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.017201   78713 cri.go:89] found id: ""
	I0816 00:38:03.017210   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:38:03.017275   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.021905   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:03.021992   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:03.061720   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.061741   78713 cri.go:89] found id: ""
	I0816 00:38:03.061748   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:38:03.061801   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.066149   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:03.066206   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:03.107130   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.107149   78713 cri.go:89] found id: ""
	I0816 00:38:03.107156   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:38:03.107213   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.111323   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:03.111372   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:03.149906   78713 cri.go:89] found id: ""
	I0816 00:38:03.149927   78713 logs.go:276] 0 containers: []
	W0816 00:38:03.149934   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:03.149940   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:03.150000   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:03.190981   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.191007   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.191011   78713 cri.go:89] found id: ""
	I0816 00:38:03.191018   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:38:03.191066   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.195733   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.199755   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:03.199775   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:03.302209   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:38:03.302239   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:03.352505   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:38:03.352548   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.392296   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:38:03.392323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.448092   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:38:03.448130   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.487516   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:38:03.487541   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:03.541954   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:03.541989   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:03.557026   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:38:03.557049   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:03.602639   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:38:03.602670   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:03.642706   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:38:03.642733   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.683504   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:38:03.683530   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.721802   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:03.721826   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.089579   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.089621   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.376613   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:03.376837   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:06.679744   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:06.679797   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.679805   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.679812   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.679819   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.679825   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.679849   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.679861   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.679869   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.679878   78713 system_pods.go:74] duration metric: took 3.845804999s to wait for pod list to return data ...
	I0816 00:38:06.679886   78713 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:06.682521   78713 default_sa.go:45] found service account: "default"
	I0816 00:38:06.682553   78713 default_sa.go:55] duration metric: took 2.660224ms for default service account to be created ...
	I0816 00:38:06.682565   78713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:06.688149   78713 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:06.688178   78713 system_pods.go:89] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.688183   78713 system_pods.go:89] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.688187   78713 system_pods.go:89] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.688192   78713 system_pods.go:89] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.688196   78713 system_pods.go:89] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.688199   78713 system_pods.go:89] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.688206   78713 system_pods.go:89] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.688213   78713 system_pods.go:89] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.688220   78713 system_pods.go:126] duration metric: took 5.649758ms to wait for k8s-apps to be running ...
	I0816 00:38:06.688226   78713 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:06.688268   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:06.706263   78713 system_svc.go:56] duration metric: took 18.025675ms WaitForService to wait for kubelet
	I0816 00:38:06.706301   78713 kubeadm.go:582] duration metric: took 4m25.434584326s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:06.706337   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:06.709536   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:06.709553   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:06.709565   78713 node_conditions.go:105] duration metric: took 3.213145ms to run NodePressure ...
	I0816 00:38:06.709576   78713 start.go:241] waiting for startup goroutines ...
	I0816 00:38:06.709582   78713 start.go:246] waiting for cluster config update ...
	I0816 00:38:06.709593   78713 start.go:255] writing updated cluster config ...
	I0816 00:38:06.709864   78713 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:06.755974   78713 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:06.757917   78713 out.go:177] * Done! kubectl is now configured to use "embed-certs-758469" cluster and "default" namespace by default
	I0816 00:38:03.959040   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:03.973674   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:03.973758   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:04.013606   79191 cri.go:89] found id: ""
	I0816 00:38:04.013653   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.013661   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:04.013667   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:04.013737   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:04.054558   79191 cri.go:89] found id: ""
	I0816 00:38:04.054590   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.054602   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:04.054609   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:04.054667   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:04.097116   79191 cri.go:89] found id: ""
	I0816 00:38:04.097143   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.097154   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:04.097162   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:04.097223   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:04.136770   79191 cri.go:89] found id: ""
	I0816 00:38:04.136798   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.136809   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:04.136816   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:04.136865   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:04.171906   79191 cri.go:89] found id: ""
	I0816 00:38:04.171929   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.171937   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:04.171943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:04.172004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:04.208694   79191 cri.go:89] found id: ""
	I0816 00:38:04.208725   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.208735   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:04.208744   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:04.208803   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:04.276713   79191 cri.go:89] found id: ""
	I0816 00:38:04.276744   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.276755   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:04.276763   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:04.276823   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:04.316646   79191 cri.go:89] found id: ""
	I0816 00:38:04.316669   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.316696   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:04.316707   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:04.316722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:04.329819   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:04.329864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:04.399032   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:04.399052   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:04.399080   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.487665   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:04.487698   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:04.530937   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.530962   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:02.496317   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:04.496477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:05.878535   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:08.377096   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:07.087584   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:07.102015   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:07.102086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:07.139530   79191 cri.go:89] found id: ""
	I0816 00:38:07.139559   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.139569   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:07.139577   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:07.139642   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:07.179630   79191 cri.go:89] found id: ""
	I0816 00:38:07.179659   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.179669   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:07.179675   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:07.179734   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:07.216407   79191 cri.go:89] found id: ""
	I0816 00:38:07.216435   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.216444   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:07.216449   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:07.216509   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:07.252511   79191 cri.go:89] found id: ""
	I0816 00:38:07.252536   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.252544   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:07.252551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:07.252613   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:07.288651   79191 cri.go:89] found id: ""
	I0816 00:38:07.288679   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.288689   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:07.288698   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:07.288757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:07.325910   79191 cri.go:89] found id: ""
	I0816 00:38:07.325963   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.325974   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:07.325982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:07.326046   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:07.362202   79191 cri.go:89] found id: ""
	I0816 00:38:07.362230   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.362244   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:07.362251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:07.362316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:07.405272   79191 cri.go:89] found id: ""
	I0816 00:38:07.405302   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.405313   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:07.405324   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:07.405339   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:07.461186   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:07.461222   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:07.475503   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:07.475544   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:07.555146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:07.555165   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:07.555179   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:07.635162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:07.635201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.174600   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:10.190418   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:10.190479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:10.251925   79191 cri.go:89] found id: ""
	I0816 00:38:10.251960   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.251969   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:10.251974   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:10.252027   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:10.289038   79191 cri.go:89] found id: ""
	I0816 00:38:10.289078   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.289088   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:10.289096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:10.289153   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:10.334562   79191 cri.go:89] found id: ""
	I0816 00:38:10.334591   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.334601   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:10.334609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:10.334669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:10.371971   79191 cri.go:89] found id: ""
	I0816 00:38:10.372000   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.372010   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:10.372018   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:10.372084   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:10.409654   79191 cri.go:89] found id: ""
	I0816 00:38:10.409685   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.409696   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:10.409703   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:10.409770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:10.446639   79191 cri.go:89] found id: ""
	I0816 00:38:10.446666   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.446675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:10.446683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:10.446750   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:10.483601   79191 cri.go:89] found id: ""
	I0816 00:38:10.483629   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.483641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:10.483648   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:10.483707   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:10.519640   79191 cri.go:89] found id: ""
	I0816 00:38:10.519670   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.519679   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:10.519690   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:10.519704   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:10.603281   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:10.603300   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:10.603311   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:10.689162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:10.689198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.730701   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:10.730724   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:10.780411   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:10.780441   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
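The gather cycle above reduces to a handful of node-side commands; a minimal shell sketch to reproduce it by hand, assuming a shell on the node (e.g. via `minikube ssh`) with crictl, journalctl and dmesg available exactly as in the Run: lines:

    # Enumerate control-plane containers per component, all states, IDs only (as in the cri.go:54 listings above).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done
    # Host-level logs collected when no containers are found.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400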
	I0816 00:38:06.997726   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:09.495539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.495753   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:10.876242   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.376332   78747 pod_ready.go:82] duration metric: took 4m0.006460655s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:38:11.376362   78747 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:38:11.376372   78747 pod_ready.go:39] duration metric: took 4m3.906659924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:38:11.376389   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:38:11.376416   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:11.376472   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:11.425716   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:11.425741   78747 cri.go:89] found id: ""
	I0816 00:38:11.425749   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:11.425804   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.431122   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:11.431195   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:11.468622   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:11.468647   78747 cri.go:89] found id: ""
	I0816 00:38:11.468657   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:11.468713   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.474270   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:11.474329   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:11.518448   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:11.518493   78747 cri.go:89] found id: ""
	I0816 00:38:11.518502   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:11.518569   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.524185   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:11.524242   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:11.561343   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:11.561367   78747 cri.go:89] found id: ""
	I0816 00:38:11.561374   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:11.561418   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.565918   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:11.565992   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:11.606010   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.606036   78747 cri.go:89] found id: ""
	I0816 00:38:11.606043   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:11.606097   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.610096   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:11.610166   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:11.646204   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:11.646229   78747 cri.go:89] found id: ""
	I0816 00:38:11.646238   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:11.646295   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.650405   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:11.650467   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:11.690407   78747 cri.go:89] found id: ""
	I0816 00:38:11.690436   78747 logs.go:276] 0 containers: []
	W0816 00:38:11.690446   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:11.690454   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:11.690510   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:11.736695   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:11.736722   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:11.736729   78747 cri.go:89] found id: ""
	I0816 00:38:11.736738   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:11.736803   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.741022   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.744983   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:11.745011   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.791452   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:11.791484   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:12.304425   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:12.304470   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:12.341318   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:12.341353   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:12.401425   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:12.401464   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:12.476598   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:12.476653   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:12.495594   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:12.495629   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:12.645961   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:12.645991   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:12.697058   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:12.697091   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:12.749085   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:12.749117   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:12.795786   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:12.795831   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:12.835928   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:12.835959   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:12.872495   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:12.872524   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
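Once container IDs are found, as in the 78747 run above, the same tool pulls each container's recent log tail; a sketch, assuming kube-apiserver as the example component and `head -n1` picking an arbitrary matching container:

    # Resolve a container ID for a component, then fetch its last 400 log lines (crictl path as logged).
    id="$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)"
    [ -n "${id}" ] && sudo /usr/bin/crictl logs --tail 400 "${id}"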
	I0816 00:38:13.294689   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:13.308762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:13.308822   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:13.345973   79191 cri.go:89] found id: ""
	I0816 00:38:13.346004   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.346015   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:13.346022   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:13.346083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:13.382905   79191 cri.go:89] found id: ""
	I0816 00:38:13.382934   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.382945   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:13.382952   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:13.383001   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:13.417616   79191 cri.go:89] found id: ""
	I0816 00:38:13.417650   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.417662   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:13.417669   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:13.417739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:13.453314   79191 cri.go:89] found id: ""
	I0816 00:38:13.453350   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.453360   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:13.453368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:13.453435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:13.488507   79191 cri.go:89] found id: ""
	I0816 00:38:13.488536   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.488547   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:13.488555   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:13.488614   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:13.527064   79191 cri.go:89] found id: ""
	I0816 00:38:13.527095   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.527108   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:13.527116   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:13.527178   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:13.562838   79191 cri.go:89] found id: ""
	I0816 00:38:13.562867   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.562876   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:13.562882   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:13.562944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:13.598924   79191 cri.go:89] found id: ""
	I0816 00:38:13.598963   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.598974   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:13.598985   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:13.598999   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:13.651122   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:13.651156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:13.665255   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:13.665281   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:13.742117   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:13.742135   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:13.742148   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:13.824685   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:13.824719   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.366542   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:16.380855   79191 kubeadm.go:597] duration metric: took 4m3.665876253s to restartPrimaryControlPlane
	W0816 00:38:16.380919   79191 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:38:16.380946   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:38:13.496702   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.996304   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.421355   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:15.437651   78747 api_server.go:72] duration metric: took 4m15.224557183s to wait for apiserver process to appear ...
	I0816 00:38:15.437677   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:38:15.437721   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:15.437782   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:15.473240   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:15.473265   78747 cri.go:89] found id: ""
	I0816 00:38:15.473273   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:15.473335   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.477666   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:15.477734   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:15.526073   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:15.526095   78747 cri.go:89] found id: ""
	I0816 00:38:15.526104   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:15.526165   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.530706   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:15.530775   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:15.571124   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:15.571149   78747 cri.go:89] found id: ""
	I0816 00:38:15.571159   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:15.571217   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.578613   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:15.578690   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:15.617432   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:15.617454   78747 cri.go:89] found id: ""
	I0816 00:38:15.617464   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:15.617529   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.621818   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:15.621899   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:15.658963   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:15.658981   78747 cri.go:89] found id: ""
	I0816 00:38:15.658988   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:15.659037   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.663170   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:15.663230   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:15.699297   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.699322   78747 cri.go:89] found id: ""
	I0816 00:38:15.699331   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:15.699388   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.704029   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:15.704085   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:15.742790   78747 cri.go:89] found id: ""
	I0816 00:38:15.742816   78747 logs.go:276] 0 containers: []
	W0816 00:38:15.742825   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:15.742830   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:15.742875   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:15.776898   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:15.776918   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:15.776922   78747 cri.go:89] found id: ""
	I0816 00:38:15.776945   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:15.777007   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.781511   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.785953   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:15.785981   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.840461   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:15.840498   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:16.320285   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:16.320323   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.362171   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:16.362200   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:16.444803   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:16.444834   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:16.461705   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:16.461732   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:16.576190   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:16.576220   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:16.626407   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:16.626449   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:16.673004   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:16.673036   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:16.724770   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:16.724797   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:16.764812   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:16.764838   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:16.804268   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:16.804300   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:16.841197   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:16.841221   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.380352   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:38:19.386760   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:38:19.387751   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:19.387773   78747 api_server.go:131] duration metric: took 3.950088801s to wait for apiserver health ...
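The healthz wait above goes through minikube's own client; roughly the same check can be made with curl, assuming the test host can reach the node IP and anonymous access to /healthz is enabled (the Kubernetes default via the system:public-info-viewer binding):

    # Expect HTTP 200 with body "ok", matching the api_server.go:279 line above.
    curl -k https://192.168.50.128:8444/healthz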
	I0816 00:38:19.387781   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:19.387801   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:19.387843   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:19.429928   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:19.429952   78747 cri.go:89] found id: ""
	I0816 00:38:19.429961   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:19.430021   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.434822   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:19.434870   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:19.476789   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:19.476811   78747 cri.go:89] found id: ""
	I0816 00:38:19.476819   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:19.476869   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.481574   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:19.481640   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:19.528718   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:19.528742   78747 cri.go:89] found id: ""
	I0816 00:38:19.528750   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:19.528799   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.533391   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:19.533455   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:19.581356   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:19.581374   78747 cri.go:89] found id: ""
	I0816 00:38:19.581381   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:19.581427   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.585915   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:19.585977   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:19.623514   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:19.623544   78747 cri.go:89] found id: ""
	I0816 00:38:19.623552   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:19.623606   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.627652   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:19.627711   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:19.663933   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:19.663957   78747 cri.go:89] found id: ""
	I0816 00:38:19.663967   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:19.664032   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.668093   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:19.668162   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:19.707688   78747 cri.go:89] found id: ""
	I0816 00:38:19.707716   78747 logs.go:276] 0 containers: []
	W0816 00:38:19.707726   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:19.707741   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:19.707804   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:19.745900   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:19.745930   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.745935   78747 cri.go:89] found id: ""
	I0816 00:38:19.745944   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:19.746002   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.750934   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.755022   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:19.755044   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:19.807228   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:19.807257   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:19.918242   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:19.918274   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:21.772367   79191 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.39139467s)
	I0816 00:38:21.772449   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:18.495150   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:20.995073   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:19.969165   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:19.969198   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:20.008945   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:20.008975   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:20.050080   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:20.050120   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:20.450059   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:20.450107   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:20.490694   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:20.490721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:20.532856   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:20.532890   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:20.609130   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:20.609178   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:20.624248   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:20.624279   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:20.675636   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:20.675669   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:20.716694   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:20.716721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:23.289748   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:23.289773   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.289778   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.289782   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.289786   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.289789   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.289792   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.289799   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.289814   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.289827   78747 system_pods.go:74] duration metric: took 3.902040304s to wait for pod list to return data ...
	I0816 00:38:23.289836   78747 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:23.293498   78747 default_sa.go:45] found service account: "default"
	I0816 00:38:23.293528   78747 default_sa.go:55] duration metric: took 3.671585ms for default service account to be created ...
	I0816 00:38:23.293539   78747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:23.298509   78747 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:23.298534   78747 system_pods.go:89] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.298540   78747 system_pods.go:89] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.298545   78747 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.298549   78747 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.298552   78747 system_pods.go:89] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.298556   78747 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.298561   78747 system_pods.go:89] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.298567   78747 system_pods.go:89] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.298576   78747 system_pods.go:126] duration metric: took 5.030455ms to wait for k8s-apps to be running ...
	I0816 00:38:23.298585   78747 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:23.298632   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:23.318383   78747 system_svc.go:56] duration metric: took 19.787836ms WaitForService to wait for kubelet
	I0816 00:38:23.318419   78747 kubeadm.go:582] duration metric: took 4m23.105331758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:23.318446   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:23.322398   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:23.322425   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:23.322436   78747 node_conditions.go:105] duration metric: took 3.985107ms to run NodePressure ...
	I0816 00:38:23.322447   78747 start.go:241] waiting for startup goroutines ...
	I0816 00:38:23.322454   78747 start.go:246] waiting for cluster config update ...
	I0816 00:38:23.322464   78747 start.go:255] writing updated cluster config ...
	I0816 00:38:23.322801   78747 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:23.374057   78747 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:23.376186   78747 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-616827" cluster and "default" namespace by default
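At this point a quick sanity check from the test host would confirm what the log reports; a sketch, assuming kubectl is on the PATH and the kubeconfig context carries the profile name (the exact context name is not printed in the log):

    # Should show default-k8s-diff-port-616827 as the active context.
    kubectl config current-context
    # The eight kube-system pods listed above, with metrics-server-6867b74b74-sxqkg still Pending.
    kubectl --context default-k8s-diff-port-616827 -n kube-system get pods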
	I0816 00:38:21.788969   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:38:21.800050   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:38:21.811193   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:38:21.811216   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:38:21.811260   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:38:21.821328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:38:21.821391   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:38:21.831777   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:38:21.841357   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:38:21.841424   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:38:21.851564   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.861262   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:38:21.861322   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.871929   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:38:21.881544   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:38:21.881595   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:38:21.891725   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:38:22.120640   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
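The reset-and-reinitialize path taken here, after the "Unable to restart control-plane node(s)" warning, reduces to the two kubeadm invocations already spelled out in the Run:/Start: lines; condensed as a sketch, with the preflight-error list elided to the value shown in the Start: line above:

    # Wipe the old control-plane state, then re-run init against the generated config.
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=<list from the Start: line above>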
	I0816 00:38:22.997351   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:25.494851   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:27.494976   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:29.495248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:31.994586   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:33.995565   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:36.494547   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:38.495194   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:40.995653   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:42.996593   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:45.495409   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:47.496072   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:49.997645   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:52.496097   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:54.994390   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:56.995869   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:58.996230   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:01.495217   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:02.989403   78489 pod_ready.go:82] duration metric: took 4m0.001106911s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	E0816 00:39:02.989435   78489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 00:39:02.989456   78489 pod_ready.go:39] duration metric: took 4m14.547419665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:02.989488   78489 kubeadm.go:597] duration metric: took 4m21.799297957s to restartPrimaryControlPlane
	W0816 00:39:02.989550   78489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:39:02.989582   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:39:29.166109   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.176504479s)
	I0816 00:39:29.166193   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:29.188082   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:39:29.207577   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:39:29.230485   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:39:29.230510   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:39:29.230564   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:39:29.242106   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:39:29.242177   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:39:29.258756   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:39:29.272824   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:39:29.272896   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:39:29.285574   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.294909   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:39:29.294985   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.304843   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:39:29.315125   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:39:29.315173   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:39:29.325422   78489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:39:29.375775   78489 kubeadm.go:310] W0816 00:39:29.358885    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.376658   78489 kubeadm.go:310] W0816 00:39:29.359753    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.504337   78489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:39:38.219769   78489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 00:39:38.219865   78489 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:39:38.219968   78489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:39:38.220094   78489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:39:38.220215   78489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 00:39:38.220302   78489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:39:38.221971   78489 out.go:235]   - Generating certificates and keys ...
	I0816 00:39:38.222037   78489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:39:38.222119   78489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:39:38.222234   78489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:39:38.222316   78489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:39:38.222430   78489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:39:38.222509   78489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:39:38.222584   78489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:39:38.222684   78489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:39:38.222767   78489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:39:38.222831   78489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:39:38.222862   78489 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:39:38.222943   78489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:39:38.223035   78489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:39:38.223121   78489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 00:39:38.223212   78489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:39:38.223299   78489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:39:38.223355   78489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:39:38.223452   78489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:39:38.223534   78489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:39:38.225012   78489 out.go:235]   - Booting up control plane ...
	I0816 00:39:38.225086   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:39:38.225153   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:39:38.225211   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:39:38.225296   78489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:39:38.225366   78489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:39:38.225399   78489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:39:38.225542   78489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 00:39:38.225706   78489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 00:39:38.225803   78489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001324649s
	I0816 00:39:38.225917   78489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 00:39:38.226004   78489 kubeadm.go:310] [api-check] The API server is healthy after 5.001672205s
	I0816 00:39:38.226125   78489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 00:39:38.226267   78489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 00:39:38.226352   78489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 00:39:38.226537   78489 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-819398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 00:39:38.226620   78489 kubeadm.go:310] [bootstrap-token] Using token: 4qqrpj.xeaneqftblh8gcp3
	I0816 00:39:38.227962   78489 out.go:235]   - Configuring RBAC rules ...
	I0816 00:39:38.228060   78489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 00:39:38.228140   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 00:39:38.228290   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 00:39:38.228437   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 00:39:38.228558   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 00:39:38.228697   78489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 00:39:38.228877   78489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 00:39:38.228942   78489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 00:39:38.229000   78489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 00:39:38.229010   78489 kubeadm.go:310] 
	I0816 00:39:38.229086   78489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 00:39:38.229096   78489 kubeadm.go:310] 
	I0816 00:39:38.229160   78489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 00:39:38.229166   78489 kubeadm.go:310] 
	I0816 00:39:38.229186   78489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 00:39:38.229252   78489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 00:39:38.229306   78489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 00:39:38.229312   78489 kubeadm.go:310] 
	I0816 00:39:38.229361   78489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 00:39:38.229367   78489 kubeadm.go:310] 
	I0816 00:39:38.229403   78489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 00:39:38.229408   78489 kubeadm.go:310] 
	I0816 00:39:38.229447   78489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 00:39:38.229504   78489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 00:39:38.229562   78489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 00:39:38.229567   78489 kubeadm.go:310] 
	I0816 00:39:38.229636   78489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 00:39:38.229701   78489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 00:39:38.229707   78489 kubeadm.go:310] 
	I0816 00:39:38.229793   78489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.229925   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0816 00:39:38.229954   78489 kubeadm.go:310] 	--control-plane 
	I0816 00:39:38.229960   78489 kubeadm.go:310] 
	I0816 00:39:38.230029   78489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 00:39:38.230038   78489 kubeadm.go:310] 
	I0816 00:39:38.230109   78489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.230211   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
	I0816 00:39:38.230223   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:39:38.230232   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:39:38.231742   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:39:38.233079   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:39:38.245435   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:39:38.269502   78489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:39:38.269566   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.269593   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-819398 minikube.k8s.io/updated_at=2024_08_16T00_39_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=no-preload-819398 minikube.k8s.io/primary=true
	I0816 00:39:38.304272   78489 ops.go:34] apiserver oom_adj: -16
	I0816 00:39:38.485643   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.986569   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.486177   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.985737   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.486311   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.985981   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.486071   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.986414   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.486292   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.603092   78489 kubeadm.go:1113] duration metric: took 4.333590575s to wait for elevateKubeSystemPrivileges
	I0816 00:39:42.603133   78489 kubeadm.go:394] duration metric: took 5m1.4690157s to StartCluster
	I0816 00:39:42.603158   78489 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.603258   78489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:39:42.604833   78489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.605072   78489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:39:42.605133   78489 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:39:42.605219   78489 addons.go:69] Setting storage-provisioner=true in profile "no-preload-819398"
	I0816 00:39:42.605254   78489 addons.go:234] Setting addon storage-provisioner=true in "no-preload-819398"
	I0816 00:39:42.605251   78489 addons.go:69] Setting default-storageclass=true in profile "no-preload-819398"
	I0816 00:39:42.605259   78489 addons.go:69] Setting metrics-server=true in profile "no-preload-819398"
	I0816 00:39:42.605295   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:39:42.605308   78489 addons.go:234] Setting addon metrics-server=true in "no-preload-819398"
	I0816 00:39:42.605309   78489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-819398"
	W0816 00:39:42.605320   78489 addons.go:243] addon metrics-server should already be in state true
	W0816 00:39:42.605266   78489 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:39:42.605355   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605370   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605697   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605717   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605731   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605735   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605777   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605837   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.606458   78489 out.go:177] * Verifying Kubernetes components...
	I0816 00:39:42.607740   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:39:42.622512   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0816 00:39:42.623130   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.623697   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.623720   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.624070   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.624666   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.624695   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.626221   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0816 00:39:42.626220   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0816 00:39:42.626608   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.626695   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.627158   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627179   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627329   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627346   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627490   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.627696   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.628049   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.628165   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.628189   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.632500   78489 addons.go:234] Setting addon default-storageclass=true in "no-preload-819398"
	W0816 00:39:42.632523   78489 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:39:42.632554   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.632897   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.632928   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.644779   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0816 00:39:42.645422   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.645995   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.646026   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.646395   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.646607   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.646960   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0816 00:39:42.647374   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.648126   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.648141   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.648471   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.649494   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.649732   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.651509   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.651600   78489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:39:42.652823   78489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:39:42.652936   78489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:42.652951   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:39:42.652970   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654197   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:39:42.654217   78489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:39:42.654234   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654380   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I0816 00:39:42.654812   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.655316   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.655332   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.655784   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.656330   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.656356   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.659148   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659319   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659629   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659648   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659776   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659794   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659959   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660138   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660164   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660330   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660444   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660478   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660587   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.660583   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.674431   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
	I0816 00:39:42.674827   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.675399   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.675420   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.675756   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.675993   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.677956   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.678195   78489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:42.678211   78489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:39:42.678230   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.681163   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681593   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.681615   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681916   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.682099   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.682197   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.682276   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.822056   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:39:42.840356   78489 node_ready.go:35] waiting up to 6m0s for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852864   78489 node_ready.go:49] node "no-preload-819398" has status "Ready":"True"
	I0816 00:39:42.852887   78489 node_ready.go:38] duration metric: took 12.497677ms for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852899   78489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:42.866637   78489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:42.908814   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:39:42.908832   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:39:42.949047   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:39:42.949070   78489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:39:42.959159   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:43.021536   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.021557   78489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:39:43.068214   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:43.082144   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.243834   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.243857   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244177   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244192   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.244201   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.244212   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244451   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244505   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.250358   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.250376   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.250608   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:43.250648   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.250656   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419115   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.350866587s)
	I0816 00:39:44.419166   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419175   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419519   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419545   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419542   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419561   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419573   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419824   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419836   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419851   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.436623   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.354435707s)
	I0816 00:39:44.436682   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.436697   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437131   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437150   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437160   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.437169   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437207   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.437495   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437517   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437528   78489 addons.go:475] Verifying addon metrics-server=true in "no-preload-819398"
	I0816 00:39:44.439622   78489 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 00:39:44.441097   78489 addons.go:510] duration metric: took 1.835961958s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0816 00:39:44.878479   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:47.373009   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:49.380832   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:50.372883   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.372919   78489 pod_ready.go:82] duration metric: took 7.506242182s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.372933   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378463   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.378486   78489 pod_ready.go:82] duration metric: took 5.546402ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378496   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383347   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.383364   78489 pod_ready.go:82] duration metric: took 4.862995ms for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383374   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387672   78489 pod_ready.go:93] pod "kube-proxy-nl7g6" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.387693   78489 pod_ready.go:82] duration metric: took 4.312811ms for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387703   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391921   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.391939   78489 pod_ready.go:82] duration metric: took 4.229092ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391945   78489 pod_ready.go:39] duration metric: took 7.539034647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:50.391958   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:39:50.392005   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:39:50.407980   78489 api_server.go:72] duration metric: took 7.802877941s to wait for apiserver process to appear ...
	I0816 00:39:50.408017   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:39:50.408039   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:39:50.412234   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:39:50.413278   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:39:50.413297   78489 api_server.go:131] duration metric: took 5.273051ms to wait for apiserver health ...
	I0816 00:39:50.413304   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:39:50.573185   78489 system_pods.go:59] 9 kube-system pods found
	I0816 00:39:50.573226   78489 system_pods.go:61] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.573233   78489 system_pods.go:61] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.573239   78489 system_pods.go:61] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.573244   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.573250   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.573257   78489 system_pods.go:61] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.573262   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.573271   78489 system_pods.go:61] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.573278   78489 system_pods.go:61] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.573288   78489 system_pods.go:74] duration metric: took 159.97729ms to wait for pod list to return data ...
	I0816 00:39:50.573301   78489 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:39:50.771164   78489 default_sa.go:45] found service account: "default"
	I0816 00:39:50.771189   78489 default_sa.go:55] duration metric: took 197.881739ms for default service account to be created ...
	I0816 00:39:50.771198   78489 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:39:50.973415   78489 system_pods.go:86] 9 kube-system pods found
	I0816 00:39:50.973448   78489 system_pods.go:89] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.973453   78489 system_pods.go:89] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.973457   78489 system_pods.go:89] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.973461   78489 system_pods.go:89] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.973465   78489 system_pods.go:89] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.973468   78489 system_pods.go:89] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.973471   78489 system_pods.go:89] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.973477   78489 system_pods.go:89] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.973482   78489 system_pods.go:89] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.973491   78489 system_pods.go:126] duration metric: took 202.288008ms to wait for k8s-apps to be running ...
	I0816 00:39:50.973498   78489 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:39:50.973539   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:50.989562   78489 system_svc.go:56] duration metric: took 16.053781ms WaitForService to wait for kubelet
	I0816 00:39:50.989595   78489 kubeadm.go:582] duration metric: took 8.384495377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:39:50.989618   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:39:51.171076   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:39:51.171109   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:39:51.171120   78489 node_conditions.go:105] duration metric: took 181.496732ms to run NodePressure ...
	I0816 00:39:51.171134   78489 start.go:241] waiting for startup goroutines ...
	I0816 00:39:51.171144   78489 start.go:246] waiting for cluster config update ...
	I0816 00:39:51.171157   78489 start.go:255] writing updated cluster config ...
	I0816 00:39:51.171465   78489 ssh_runner.go:195] Run: rm -f paused
	I0816 00:39:51.220535   78489 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:39:51.223233   78489 out.go:177] * Done! kubectl is now configured to use "no-preload-819398" cluster and "default" namespace by default
	I0816 00:40:18.143220   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:40:18.143333   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:40:18.144757   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.144804   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.144888   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.145018   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.145134   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:18.145210   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:18.146791   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:18.146879   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:18.146965   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:18.147072   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:18.147164   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:18.147258   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:18.147340   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:18.147434   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:18.147525   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:18.147613   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:18.147708   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:18.147744   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:18.147791   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:18.147839   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:18.147916   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:18.147989   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:18.148045   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:18.148194   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:18.148318   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:18.148365   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:18.148458   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:18.149817   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:18.149941   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:18.150044   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:18.150107   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:18.150187   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:18.150323   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:40:18.150380   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:40:18.150460   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150671   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.150766   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150953   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151033   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151232   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151305   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151520   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151614   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151840   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151856   79191 kubeadm.go:310] 
	I0816 00:40:18.151917   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:40:18.151978   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:40:18.151992   79191 kubeadm.go:310] 
	I0816 00:40:18.152046   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:40:18.152097   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:40:18.152204   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:40:18.152218   79191 kubeadm.go:310] 
	I0816 00:40:18.152314   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:40:18.152349   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:40:18.152377   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:40:18.152384   79191 kubeadm.go:310] 
	I0816 00:40:18.152466   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:40:18.152537   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:40:18.152543   79191 kubeadm.go:310] 
	I0816 00:40:18.152674   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:40:18.152769   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:40:18.152853   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:40:18.152914   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:40:18.152978   79191 kubeadm.go:310] 
	W0816 00:40:18.153019   79191 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 00:40:18.153055   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:40:18.634058   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:40:18.648776   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:40:18.659504   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:40:18.659529   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:40:18.659584   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:40:18.670234   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:40:18.670285   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:40:18.680370   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:40:18.689496   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:40:18.689557   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:40:18.698949   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.708056   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:40:18.708118   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.718261   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:40:18.728708   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:40:18.728777   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:40:18.739253   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:40:18.819666   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.819746   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.966568   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.966704   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.966868   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:19.168323   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:19.170213   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:19.170335   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:19.170464   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:19.170546   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:19.170598   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:19.170670   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:19.170740   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:19.170828   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:19.170924   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:19.171031   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:19.171129   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:19.171179   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:19.171261   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:19.421256   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:19.585260   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:19.672935   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:19.928620   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:19.952420   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:19.953527   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:19.953578   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:20.090384   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:20.092904   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:20.093037   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:20.105743   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:20.106980   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:20.108199   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:20.111014   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:41:00.113053   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:41:00.113479   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:00.113752   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:05.113795   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:05.114091   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:15.114695   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:15.114932   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:35.116019   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:35.116207   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.116728   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:42:15.116994   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.117018   79191 kubeadm.go:310] 
	I0816 00:42:15.117071   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:42:15.117136   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:42:15.117147   79191 kubeadm.go:310] 
	I0816 00:42:15.117198   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:42:15.117248   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:42:15.117402   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:42:15.117412   79191 kubeadm.go:310] 
	I0816 00:42:15.117543   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:42:15.117601   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:42:15.117636   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:42:15.117644   79191 kubeadm.go:310] 
	I0816 00:42:15.117778   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:42:15.117918   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:42:15.117929   79191 kubeadm.go:310] 
	I0816 00:42:15.118083   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:42:15.118215   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:42:15.118313   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:42:15.118412   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:42:15.118433   79191 kubeadm.go:310] 
	I0816 00:42:15.118582   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:42:15.118698   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:42:15.118843   79191 kubeadm.go:394] duration metric: took 8m2.460648867s to StartCluster
	I0816 00:42:15.118855   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
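	The kubeadm output above points at the kubelet never coming up on the node. As a minimal sketch of the diagnostics it recommends, run from the host against the failing minikube profile (the <profile> placeholder is an assumption standing in for the profile that produced this log; the CRI-O socket path is the one kubeadm prints):
	
	  # check whether the kubelet service is running inside the node
	  minikube ssh -p <profile> "sudo systemctl status kubelet"
	  # inspect recent kubelet journal entries for the failure reason
	  minikube ssh -p <profile> "sudo journalctl -xeu kubelet"
	  # list any control-plane containers that crashed when CRI-O started them
	  minikube ssh -p <profile> "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"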
	I0816 00:42:15.118891   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:42:15.118957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:42:15.162809   79191 cri.go:89] found id: ""
	I0816 00:42:15.162837   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.162848   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:42:15.162855   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:42:15.162925   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:42:15.198020   79191 cri.go:89] found id: ""
	I0816 00:42:15.198042   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.198053   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:42:15.198063   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:42:15.198132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:42:15.238168   79191 cri.go:89] found id: ""
	I0816 00:42:15.238197   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.238206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:42:15.238213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:42:15.238273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:42:15.278364   79191 cri.go:89] found id: ""
	I0816 00:42:15.278391   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.278401   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:42:15.278407   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:42:15.278465   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:42:15.316182   79191 cri.go:89] found id: ""
	I0816 00:42:15.316209   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.316216   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:42:15.316222   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:42:15.316278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:42:15.352934   79191 cri.go:89] found id: ""
	I0816 00:42:15.352962   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.352970   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:42:15.352976   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:42:15.353031   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:42:15.388940   79191 cri.go:89] found id: ""
	I0816 00:42:15.388966   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.388973   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:42:15.388983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:42:15.389042   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:42:15.424006   79191 cri.go:89] found id: ""
	I0816 00:42:15.424035   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.424043   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:42:15.424054   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:42:15.424073   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:42:15.504823   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
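	The connection-refused error above is expected at this point: with the kubelet down, no kube-apiserver container was ever started, so nothing is listening on localhost:8443 inside the node. A quick way to confirm that from the host (a sketch, assuming the ss utility is available in the guest image and <profile> is the failing profile):
	
	  # nothing should be listening on the apiserver port while the control plane is down
	  minikube ssh -p <profile> "sudo ss -tlnp | grep 8443 || echo 'apiserver not listening'"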
	I0816 00:42:15.504846   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:42:15.504858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:42:15.608927   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:42:15.608959   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:42:15.676785   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:42:15.676810   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:42:15.744763   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:42:15.744805   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0816 00:42:15.760944   79191 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 00:42:15.761012   79191 out.go:270] * 
	W0816 00:42:15.761078   79191 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.761098   79191 out.go:270] * 
	W0816 00:42:15.762220   79191 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:42:15.765697   79191 out.go:201] 
	W0816 00:42:15.766942   79191 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.767018   79191 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 00:42:15.767040   79191 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
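	The suggestion and the linked issue point at a kubelet cgroup-driver mismatch. As a sketch of how that hint would be applied to this KVM/CRI-O job (the <profile> placeholder and the driver/runtime flags are assumptions based on the job configuration; only the --extra-config value comes from the message above):
	
	  # retry the failing profile with the kubelet forced onto the systemd cgroup driver
	  minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	    --extra-config=kubelet.cgroup-driver=systemd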
	I0816 00:42:15.768526   79191 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.330579378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769333330555835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52c14d39-cc93-4553-bc88-f5bc6440e6f0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.331135528Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=509c0b72-5b6f-4b84-b5d6-760c5860e2dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.331189355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=509c0b72-5b6f-4b84-b5d6-760c5860e2dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.331384712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6b16872e7c9a9093f2db5519f1a81fc1978dac654132e59ba7f2cce41e8a3f7,PodSandboxId:5d189d1e30f4c889864fa8d722d32f71349ca7e9216ab3ef1b3f2ac90f9b1698,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768784823389322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b813a00-5eeb-468e-8591-e3d83ddb1556,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7966e96977b9c6f04b0f3c8d86f9e867c59e5aa292a88148c12dc235862e8648,PodSandboxId:7ef517f1733e4b675d9de404f63f0d5ed642f3566154dce7f5175384cf626bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784269560736,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wqr8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a3f3eb-5b2c-4bca-a1c6-b33beca82a09,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6785d05a6b876a748d371b942f43af11336c7411d63c0145cb43aed85e0aa51d,PodSandboxId:7faedbe535f7ba3d9aa5791920129ae1f4dce33577c5a00cefa5d97e6c316cd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784001697037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5gdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
2bb7c6-b9f2-44b2-bff1-e7c5f163c208,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5533173575d6d28dd135acfbade9b483d69062563f9c2f76206b680a3719468,PodSandboxId:300eb6af029dd8f572627fabd88ee3f2617fffdda32f6ec7f326a00e85e4eeeb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723768783477900167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nl7g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4697f7b9-3f79-451d-927e-15eb68e88eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45de6162ae2e10e3300ffe32e336e3ab34806d97034d3f35175aae5aa80bfe5e,PodSandboxId:f49268e0f7d8c9800128a7855b6a3cf120983757de5c7ad2314282da4b8b9559,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768772300131134,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567ee3d9ca9b16f959e11b063db2324,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6ec95b8ba0e66e46bfd672285d20f04d88090cebcc0a304809e2ad5c4db1b,PodSandboxId:d1c9dd5db18ce5cf978534a308e54369c13ffd5b6ffec01c10549298d456c46d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768772287724863,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874cabf22af8702efdca4d9dd5ad535a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5abeabb7b47437f57c51947a7ac69eac20d4efbeee808eede61bec4d9fe0256,PodSandboxId:aeceefc585992ac479585092f9c98ffd57752d9644b5e8f6689975c675a79167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768772273391465,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac513da8f7badd477e959cdb64321d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12ec55a551e3f5f2f29071296fa47f7b8950e2cbfe9f6a1f3cefb69be76ea07,PodSandboxId:77bff51c2f0926049ae59fc52ec7a5046a459d0e899288505478cfe8017363ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768772222494974,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d261dba4ec9c924355d4f7d3f4b9e4a866f6399d07e8cee1b0c5a7ddb3384a97,PodSandboxId:2002911dadf2841da6d0ad5d91504520b92c59428ce5f1a3242e50bf610707cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723768483610400780,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=509c0b72-5b6f-4b84-b5d6-760c5860e2dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.370347804Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=563f08f8-8966-4643-96be-50b922892027 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.370417048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=563f08f8-8966-4643-96be-50b922892027 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.372231109Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=474f4c45-e3c2-4be3-b2fc-783e2a382300 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.372569142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769333372548882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=474f4c45-e3c2-4be3-b2fc-783e2a382300 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.373149241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69b6a58c-11e4-4be6-815f-54bf20f8a909 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.373201206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69b6a58c-11e4-4be6-815f-54bf20f8a909 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.373388108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6b16872e7c9a9093f2db5519f1a81fc1978dac654132e59ba7f2cce41e8a3f7,PodSandboxId:5d189d1e30f4c889864fa8d722d32f71349ca7e9216ab3ef1b3f2ac90f9b1698,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768784823389322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b813a00-5eeb-468e-8591-e3d83ddb1556,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7966e96977b9c6f04b0f3c8d86f9e867c59e5aa292a88148c12dc235862e8648,PodSandboxId:7ef517f1733e4b675d9de404f63f0d5ed642f3566154dce7f5175384cf626bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784269560736,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wqr8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a3f3eb-5b2c-4bca-a1c6-b33beca82a09,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6785d05a6b876a748d371b942f43af11336c7411d63c0145cb43aed85e0aa51d,PodSandboxId:7faedbe535f7ba3d9aa5791920129ae1f4dce33577c5a00cefa5d97e6c316cd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784001697037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5gdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
2bb7c6-b9f2-44b2-bff1-e7c5f163c208,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5533173575d6d28dd135acfbade9b483d69062563f9c2f76206b680a3719468,PodSandboxId:300eb6af029dd8f572627fabd88ee3f2617fffdda32f6ec7f326a00e85e4eeeb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723768783477900167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nl7g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4697f7b9-3f79-451d-927e-15eb68e88eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45de6162ae2e10e3300ffe32e336e3ab34806d97034d3f35175aae5aa80bfe5e,PodSandboxId:f49268e0f7d8c9800128a7855b6a3cf120983757de5c7ad2314282da4b8b9559,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768772300131134,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567ee3d9ca9b16f959e11b063db2324,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6ec95b8ba0e66e46bfd672285d20f04d88090cebcc0a304809e2ad5c4db1b,PodSandboxId:d1c9dd5db18ce5cf978534a308e54369c13ffd5b6ffec01c10549298d456c46d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768772287724863,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874cabf22af8702efdca4d9dd5ad535a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5abeabb7b47437f57c51947a7ac69eac20d4efbeee808eede61bec4d9fe0256,PodSandboxId:aeceefc585992ac479585092f9c98ffd57752d9644b5e8f6689975c675a79167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768772273391465,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac513da8f7badd477e959cdb64321d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12ec55a551e3f5f2f29071296fa47f7b8950e2cbfe9f6a1f3cefb69be76ea07,PodSandboxId:77bff51c2f0926049ae59fc52ec7a5046a459d0e899288505478cfe8017363ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768772222494974,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d261dba4ec9c924355d4f7d3f4b9e4a866f6399d07e8cee1b0c5a7ddb3384a97,PodSandboxId:2002911dadf2841da6d0ad5d91504520b92c59428ce5f1a3242e50bf610707cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723768483610400780,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69b6a58c-11e4-4be6-815f-54bf20f8a909 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.414495726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16cc6f17-72e8-4b9a-a9de-8e2aaed7d1d1 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.414572798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16cc6f17-72e8-4b9a-a9de-8e2aaed7d1d1 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.415626531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fdc6f77a-1e41-4c77-bc9d-a945ecec8462 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.415964908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769333415945641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fdc6f77a-1e41-4c77-bc9d-a945ecec8462 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.416442738Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2f9e6cb-faa1-495d-8237-7949bb30c765 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.416493411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2f9e6cb-faa1-495d-8237-7949bb30c765 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.416681580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6b16872e7c9a9093f2db5519f1a81fc1978dac654132e59ba7f2cce41e8a3f7,PodSandboxId:5d189d1e30f4c889864fa8d722d32f71349ca7e9216ab3ef1b3f2ac90f9b1698,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768784823389322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b813a00-5eeb-468e-8591-e3d83ddb1556,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7966e96977b9c6f04b0f3c8d86f9e867c59e5aa292a88148c12dc235862e8648,PodSandboxId:7ef517f1733e4b675d9de404f63f0d5ed642f3566154dce7f5175384cf626bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784269560736,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wqr8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a3f3eb-5b2c-4bca-a1c6-b33beca82a09,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6785d05a6b876a748d371b942f43af11336c7411d63c0145cb43aed85e0aa51d,PodSandboxId:7faedbe535f7ba3d9aa5791920129ae1f4dce33577c5a00cefa5d97e6c316cd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784001697037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5gdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
2bb7c6-b9f2-44b2-bff1-e7c5f163c208,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5533173575d6d28dd135acfbade9b483d69062563f9c2f76206b680a3719468,PodSandboxId:300eb6af029dd8f572627fabd88ee3f2617fffdda32f6ec7f326a00e85e4eeeb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723768783477900167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nl7g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4697f7b9-3f79-451d-927e-15eb68e88eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45de6162ae2e10e3300ffe32e336e3ab34806d97034d3f35175aae5aa80bfe5e,PodSandboxId:f49268e0f7d8c9800128a7855b6a3cf120983757de5c7ad2314282da4b8b9559,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768772300131134,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567ee3d9ca9b16f959e11b063db2324,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6ec95b8ba0e66e46bfd672285d20f04d88090cebcc0a304809e2ad5c4db1b,PodSandboxId:d1c9dd5db18ce5cf978534a308e54369c13ffd5b6ffec01c10549298d456c46d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768772287724863,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874cabf22af8702efdca4d9dd5ad535a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5abeabb7b47437f57c51947a7ac69eac20d4efbeee808eede61bec4d9fe0256,PodSandboxId:aeceefc585992ac479585092f9c98ffd57752d9644b5e8f6689975c675a79167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768772273391465,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac513da8f7badd477e959cdb64321d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12ec55a551e3f5f2f29071296fa47f7b8950e2cbfe9f6a1f3cefb69be76ea07,PodSandboxId:77bff51c2f0926049ae59fc52ec7a5046a459d0e899288505478cfe8017363ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768772222494974,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d261dba4ec9c924355d4f7d3f4b9e4a866f6399d07e8cee1b0c5a7ddb3384a97,PodSandboxId:2002911dadf2841da6d0ad5d91504520b92c59428ce5f1a3242e50bf610707cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723768483610400780,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2f9e6cb-faa1-495d-8237-7949bb30c765 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.454105816Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75182228-17cd-40a4-be11-3effaa605da8 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.454179567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75182228-17cd-40a4-be11-3effaa605da8 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.455315716Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8830ae5-e03e-4758-a7e7-73d8306f1919 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:48:53 no-preload-819398 crio[729]: time="2024-08-16 00:48:53.455647179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769333455626133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8830ae5-e03e-4758-a7e7-73d8306f1919 name=/runtime.v1.ImageService/ImageFsInfo
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f6b16872e7c9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5d189d1e30f4c       storage-provisioner
	7966e96977b9c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   7ef517f1733e4       coredns-6f6b679f8f-wqr8r
	6785d05a6b876       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   7faedbe535f7b       coredns-6f6b679f8f-5gdv9
	f5533173575d6       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   300eb6af029dd       kube-proxy-nl7g6
	45de6162ae2e1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   f49268e0f7d8c       etcd-no-preload-819398
	8ef6ec95b8ba0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   d1c9dd5db18ce       kube-scheduler-no-preload-819398
	f5abeabb7b474       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   aeceefc585992       kube-controller-manager-no-preload-819398
	a12ec55a551e3       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   77bff51c2f092       kube-apiserver-no-preload-819398
	d261dba4ec9c9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   2002911dadf28       kube-apiserver-no-preload-819398
	
	
	==> coredns [6785d05a6b876a748d371b942f43af11336c7411d63c0145cb43aed85e0aa51d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [7966e96977b9c6f04b0f3c8d86f9e867c59e5aa292a88148c12dc235862e8648] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-819398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-819398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=no-preload-819398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T00_39_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 00:39:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-819398
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:48:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 00:44:54 +0000   Fri, 16 Aug 2024 00:39:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 00:44:54 +0000   Fri, 16 Aug 2024 00:39:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 00:44:54 +0000   Fri, 16 Aug 2024 00:39:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 00:44:54 +0000   Fri, 16 Aug 2024 00:39:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.15
	  Hostname:    no-preload-819398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bf6cdb904364dc486e6cbe723db5d1c
	  System UUID:                5bf6cdb9-0436-4dc4-86e6-cbe723db5d1c
	  Boot ID:                    44c8d6dd-79df-4822-926d-e4e2fbe958e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5gdv9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-6f6b679f8f-wqr8r                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-no-preload-819398                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-no-preload-819398             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-no-preload-819398    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-nl7g6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-819398             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-dz5h4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m9s   kube-proxy       
	  Normal  Starting                 9m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s  kubelet          Node no-preload-819398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s  kubelet          Node no-preload-819398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s  kubelet          Node no-preload-819398 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node no-preload-819398 event: Registered Node no-preload-819398 in Controller
	
	
	==> dmesg <==
	[  +0.060975] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043298] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.190561] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.605423] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.581074] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.178451] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.061494] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065291] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.162833] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.150679] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.288770] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[ +16.023834] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.058376] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.149325] systemd-fstab-generator[1431]: Ignoring "noauto" option for root device
	[  +4.162014] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.256111] kauditd_printk_skb: 86 callbacks suppressed
	[Aug16 00:39] systemd-fstab-generator[3078]: Ignoring "noauto" option for root device
	[  +0.065413] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.503380] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +0.082001] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.322754] systemd-fstab-generator[3531]: Ignoring "noauto" option for root device
	[  +0.119334] kauditd_printk_skb: 12 callbacks suppressed
	[Aug16 00:41] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [45de6162ae2e10e3300ffe32e336e3ab34806d97034d3f35175aae5aa80bfe5e] <==
	{"level":"info","ts":"2024-08-16T00:39:32.702211Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T00:39:32.702597Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4e5e32f94c376694","initial-advertise-peer-urls":["https://192.168.61.15:2380"],"listen-peer-urls":["https://192.168.61.15:2380"],"advertise-client-urls":["https://192.168.61.15:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.15:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T00:39:32.702692Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T00:39:32.702988Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.15:2380"}
	{"level":"info","ts":"2024-08-16T00:39:32.703596Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.15:2380"}
	{"level":"info","ts":"2024-08-16T00:39:33.424196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T00:39:33.424309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T00:39:33.424365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 received MsgPreVoteResp from 4e5e32f94c376694 at term 1"}
	{"level":"info","ts":"2024-08-16T00:39:33.424401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T00:39:33.424426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 received MsgVoteResp from 4e5e32f94c376694 at term 2"}
	{"level":"info","ts":"2024-08-16T00:39:33.424452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 became leader at term 2"}
	{"level":"info","ts":"2024-08-16T00:39:33.424479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e5e32f94c376694 elected leader 4e5e32f94c376694 at term 2"}
	{"level":"info","ts":"2024-08-16T00:39:33.428375Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:39:33.430481Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4e5e32f94c376694","local-member-attributes":"{Name:no-preload-819398 ClientURLs:[https://192.168.61.15:2379]}","request-path":"/0/members/4e5e32f94c376694/attributes","cluster-id":"cec272b56a0b2be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T00:39:33.430714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:39:33.430904Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cec272b56a0b2be","local-member-id":"4e5e32f94c376694","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:39:33.433156Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:39:33.433230Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:39:33.433370Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:39:33.436544Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:39:33.439838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T00:39:33.459199Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T00:39:33.459290Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T00:39:33.460735Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:39:33.482856Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.15:2379"}
	
	
	==> kernel <==
	 00:48:53 up 14 min,  0 users,  load average: 0.19, 0.16, 0.15
	Linux no-preload-819398 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a12ec55a551e3f5f2f29071296fa47f7b8950e2cbfe9f6a1f3cefb69be76ea07] <==
	W0816 00:44:36.074280       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:44:36.074523       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 00:44:36.075657       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:44:36.075725       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:45:36.076751       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:45:36.076830       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 00:45:36.076883       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:45:36.076934       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 00:45:36.077982       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:45:36.078139       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:47:36.078220       1 handler_proxy.go:99] no RequestInfo found in the context
	W0816 00:47:36.078327       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:47:36.078538       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0816 00:47:36.078734       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 00:47:36.079979       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:47:36.080147       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d261dba4ec9c924355d4f7d3f4b9e4a866f6399d07e8cee1b0c5a7ddb3384a97] <==
	W0816 00:39:24.102891       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.117710       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.200672       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.211779       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.221428       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.271181       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.311341       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.379348       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.412328       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.491760       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.524198       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.545686       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.680026       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.814217       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:27.959167       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:28.162616       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:28.685492       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:28.851692       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:28.923132       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.016245       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.041846       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.060767       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.075294       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.144697       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.156243       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f5abeabb7b47437f57c51947a7ac69eac20d4efbeee808eede61bec4d9fe0256] <==
	E0816 00:43:42.071819       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:43:42.537572       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:44:12.080141       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:44:12.553845       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:44:42.087187       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:44:42.562758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:44:54.252416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-819398"
	E0816 00:45:12.093528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:45:12.571673       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:45:42.100803       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:45:42.579320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:45:48.543941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="272.016µs"
	I0816 00:46:03.543117       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="163.838µs"
	E0816 00:46:12.107218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:46:12.593890       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:46:42.114116       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:46:42.602267       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:47:12.120560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:47:12.610038       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:47:42.126408       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:47:42.619671       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:48:12.133633       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:48:12.637242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:48:42.140035       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:48:42.645502       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f5533173575d6d28dd135acfbade9b483d69062563f9c2f76206b680a3719468] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 00:39:43.883727       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 00:39:43.895937       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.15"]
	E0816 00:39:43.896000       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 00:39:43.984283       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 00:39:43.985813       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 00:39:43.985894       1 server_linux.go:169] "Using iptables Proxier"
	I0816 00:39:44.000580       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 00:39:44.000808       1 server.go:483] "Version info" version="v1.31.0"
	I0816 00:39:44.000819       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:39:44.007960       1 config.go:197] "Starting service config controller"
	I0816 00:39:44.008133       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 00:39:44.008235       1 config.go:104] "Starting endpoint slice config controller"
	I0816 00:39:44.008263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 00:39:44.012553       1 config.go:326] "Starting node config controller"
	I0816 00:39:44.012630       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 00:39:44.113401       1 shared_informer.go:320] Caches are synced for node config
	I0816 00:39:44.113460       1 shared_informer.go:320] Caches are synced for service config
	I0816 00:39:44.113511       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8ef6ec95b8ba0e66e46bfd672285d20f04d88090cebcc0a304809e2ad5c4db1b] <==
	W0816 00:39:35.125008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 00:39:35.125036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:35.125158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 00:39:35.125188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:35.125232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 00:39:35.125243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:35.125281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 00:39:35.125309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:35.928288       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 00:39:35.928354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.017805       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 00:39:36.017862       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 00:39:36.091805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 00:39:36.091859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.146604       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 00:39:36.146653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.146713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 00:39:36.146724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.346030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 00:39:36.346129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.367204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 00:39:36.367255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.384596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 00:39:36.384648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 00:39:37.918628       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 00:47:44 no-preload-819398 kubelet[3408]: E0816 00:47:44.526684    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:47:47 no-preload-819398 kubelet[3408]: E0816 00:47:47.703152    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769267702802949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:47:47 no-preload-819398 kubelet[3408]: E0816 00:47:47.703194    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769267702802949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:47:56 no-preload-819398 kubelet[3408]: E0816 00:47:56.527458    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:47:57 no-preload-819398 kubelet[3408]: E0816 00:47:57.707674    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769277704789713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:47:57 no-preload-819398 kubelet[3408]: E0816 00:47:57.707767    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769277704789713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:07 no-preload-819398 kubelet[3408]: E0816 00:48:07.710260    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769287709731269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:07 no-preload-819398 kubelet[3408]: E0816 00:48:07.711616    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769287709731269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:10 no-preload-819398 kubelet[3408]: E0816 00:48:10.526963    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:48:17 no-preload-819398 kubelet[3408]: E0816 00:48:17.713233    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769297712816516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:17 no-preload-819398 kubelet[3408]: E0816 00:48:17.713298    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769297712816516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:22 no-preload-819398 kubelet[3408]: E0816 00:48:22.527418    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:48:27 no-preload-819398 kubelet[3408]: E0816 00:48:27.717118    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769307716214397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:27 no-preload-819398 kubelet[3408]: E0816 00:48:27.717677    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769307716214397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:35 no-preload-819398 kubelet[3408]: E0816 00:48:35.531558    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:48:37 no-preload-819398 kubelet[3408]: E0816 00:48:37.551979    3408 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 00:48:37 no-preload-819398 kubelet[3408]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 00:48:37 no-preload-819398 kubelet[3408]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 00:48:37 no-preload-819398 kubelet[3408]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 00:48:37 no-preload-819398 kubelet[3408]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 00:48:37 no-preload-819398 kubelet[3408]: E0816 00:48:37.720774    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769317720149819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:37 no-preload-819398 kubelet[3408]: E0816 00:48:37.720822    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769317720149819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:47 no-preload-819398 kubelet[3408]: E0816 00:48:47.528983    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:48:47 no-preload-819398 kubelet[3408]: E0816 00:48:47.723010    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769327722513968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:48:47 no-preload-819398 kubelet[3408]: E0816 00:48:47.723221    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769327722513968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f6b16872e7c9a9093f2db5519f1a81fc1978dac654132e59ba7f2cce41e8a3f7] <==
	I0816 00:39:44.909499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 00:39:44.919770       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 00:39:44.921370       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 00:39:44.931933       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 00:39:44.932207       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-819398_8c560925-8e6c-46e7-a19f-5e6bb7d0cd3f!
	I0816 00:39:44.934517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0c373b1e-4f23-4ee3-b37a-25fd9a0ead7f", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-819398_8c560925-8e6c-46e7-a19f-5e6bb7d0cd3f became leader
	I0816 00:39:45.033350       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-819398_8c560925-8e6c-46e7-a19f-5e6bb7d0cd3f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-819398 -n no-preload-819398
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-819398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-dz5h4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-819398 describe pod metrics-server-6867b74b74-dz5h4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-819398 describe pod metrics-server-6867b74b74-dz5h4: exit status 1 (104.310068ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-dz5h4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-819398 describe pod metrics-server-6867b74b74-dz5h4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
	[previous warning repeated 9 more times]
E0816 00:42:28.472911   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
	[previous warning repeated 21 more times]
E0816 00:42:51.159583   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
	[previous warning repeated 3 more times]
E0816 00:42:54.572850   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
	[previous warning repeated 37 more times]
E0816 00:43:32.865436   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
	[previous warning repeated 4 more times]
E0816 00:43:37.364523   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
	[previous warning repeated 4 more times]
E0816 00:43:43.006694   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
	[previous warning repeated 8 more times]
E0816 00:43:51.534853   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
	[previous warning repeated 29 more times]
E0816 00:44:21.520069   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(warning above repeated 31 more times)
E0816 00:44:53.799705   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(warning above repeated 6 more times)
E0816 00:45:00.429888   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(warning above repeated 4 more times)
E0816 00:45:06.069314   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(warning above repeated 18 more times)
E0816 00:45:25.212406   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(warning above repeated 19 more times)
E0816 00:45:44.583745   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(warning above repeated 46 more times)
E0816 00:46:31.509780   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
(warning above repeated 25 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
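For context, the warning above is emitted while the harness repeatedly lists the dashboard pods by label against an apiserver that is still down after the stop. A minimal sketch of an equivalent poll is shown below; the kubeconfig path, sleep interval, and overall structure are illustrative assumptions, not the harness's actual helper code.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path; the real harness uses the minikube profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll until the labelled pods can be listed, printing a warning on each failure;
        // this is the pattern reflected by the repeated log lines above.
        for {
            pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(
                context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"},
            )
            if err != nil {
                fmt.Printf("WARNING: pod list for %q %q returned: %v\n",
                    "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", err)
                time.Sleep(3 * time.Second)
                continue
            }
            fmt.Printf("found %d pod(s)\n", len(pods.Items))
            return
        }
    }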
E0816 00:47:28.472998   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
    [last message repeated 21 more times]
E0816 00:47:51.159810   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
    [last message repeated 5 more times]
E0816 00:47:56.870922   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
    [last message repeated 40 more times]
E0816 00:48:37.364854   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
    [last message repeated 4 more times]
E0816 00:48:43.006861   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
    [last message repeated 38 more times]
E0816 00:49:21.519551   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0816 00:49:53.799603   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0816 00:50:25.213168   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 2 (226.578974ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-098619" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 2 (222.963514ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-098619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-098619 logs -n 25: (1.652343093s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-697641 sudo cat                              | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo find                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo crio                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-697641                                       | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-067133 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | disable-driver-mounts-067133                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:25 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-819398             | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC | 16 Aug 24 00:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-758469            | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-616827  | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098619        | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-819398                  | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-758469                 | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-616827       | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098619             | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:29:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
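
The log body below follows the klog prefix format described on the line above: severity letter, month/day, timestamp, thread id, source file:line, then the message. Purely as an illustration (not part of the captured output, and not minikube code; the regular expression and the chosen output fields are assumptions derived only from that format string), a small Go sketch that pulls out just the warning and error lines from such a log looks like this:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:\]]+):(\d+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // config dumps make some lines very long
	for sc.Scan() {
		m := klogLine.FindStringSubmatch(strings.TrimSpace(sc.Text()))
		if m == nil {
			continue // continuation lines and non-klog output
		}
		// m[1]=severity, m[3]=time, m[5]=file, m[6]=line, m[7]=message
		if m[1] == "W" || m[1] == "E" {
			fmt.Printf("%s %s %s:%s %s\n", m[1], m[3], m[5], m[6], m[7])
		}
	}
}

Fed this log on stdin, a filter like that would surface only the few W-level entries further down, such as the "unexpected machine state, will restart" and "StartHost failed, but will try again" lines.
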
	I0816 00:29:51.785297   79191 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:29:51.785388   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785392   79191 out.go:358] Setting ErrFile to fd 2...
	I0816 00:29:51.785396   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785578   79191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:29:51.786145   79191 out.go:352] Setting JSON to false
	I0816 00:29:51.787066   79191 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7892,"bootTime":1723760300,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:29:51.787122   79191 start.go:139] virtualization: kvm guest
	I0816 00:29:51.789057   79191 out.go:177] * [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:29:51.790274   79191 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:29:51.790269   79191 notify.go:220] Checking for updates...
	I0816 00:29:51.792828   79191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:29:51.794216   79191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:29:51.795553   79191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:29:51.796761   79191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:29:51.798018   79191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:29:51.799561   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:29:51.799935   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.799990   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.814617   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0816 00:29:51.815056   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.815584   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.815606   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.815933   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.816131   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:51.817809   79191 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 00:29:51.819204   79191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:29:51.819604   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.819652   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.834270   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0816 00:29:51.834584   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.834992   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.835015   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.835303   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.835478   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:49.226097   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:51.870472   79191 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:29:51.872031   79191 start.go:297] selected driver: kvm2
	I0816 00:29:51.872049   79191 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.872137   79191 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:29:51.872785   79191 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.872848   79191 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:29:51.887731   79191 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:29:51.888078   79191 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:29:51.888141   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:29:51.888154   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:29:51.888203   79191 start.go:340] cluster config:
	{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.888300   79191 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.890190   79191 out.go:177] * Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	I0816 00:29:51.891529   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:29:51.891557   79191 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:29:51.891565   79191 cache.go:56] Caching tarball of preloaded images
	I0816 00:29:51.891645   79191 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:29:51.891664   79191 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 00:29:51.891747   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:29:51.891915   79191 start.go:360] acquireMachinesLock for old-k8s-version-098619: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:29:55.306158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:58.378266   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:04.458137   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:07.530158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:13.610160   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:16.682057   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:22.762088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:25.834157   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:31.914106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:34.986091   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:41.066143   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:44.138152   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:50.218140   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:53.290166   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:59.370080   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:02.442130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:08.522126   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:11.594144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:17.674104   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:20.746185   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:26.826131   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:29.898113   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:35.978100   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:39.050136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:45.130120   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:48.202078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:54.282078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:57.354088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:03.434136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:06.506153   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:12.586125   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:15.658144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:21.738130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:24.810191   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:30.890130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:33.962132   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:40.042062   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:43.114154   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:49.194151   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:52.266130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:58.346106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:01.418139   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:04.422042   78713 start.go:364] duration metric: took 4m25.166768519s to acquireMachinesLock for "embed-certs-758469"
	I0816 00:33:04.422099   78713 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:04.422107   78713 fix.go:54] fixHost starting: 
	I0816 00:33:04.422426   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:04.422458   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:04.437335   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I0816 00:33:04.437779   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:04.438284   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:04.438306   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:04.438646   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:04.438873   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:04.439045   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:04.440597   78713 fix.go:112] recreateIfNeeded on embed-certs-758469: state=Stopped err=<nil>
	I0816 00:33:04.440627   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	W0816 00:33:04.440781   78713 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:04.442527   78713 out.go:177] * Restarting existing kvm2 VM for "embed-certs-758469" ...
	I0816 00:33:04.419735   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:04.419772   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420077   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:33:04.420102   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420299   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:33:04.421914   78489 machine.go:96] duration metric: took 4m37.429789672s to provisionDockerMachine
	I0816 00:33:04.421957   78489 fix.go:56] duration metric: took 4m37.451098771s for fixHost
	I0816 00:33:04.421965   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 4m37.451130669s
	W0816 00:33:04.421995   78489 start.go:714] error starting host: provision: host is not running
	W0816 00:33:04.422099   78489 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 00:33:04.422111   78489 start.go:729] Will try again in 5 seconds ...
	I0816 00:33:04.443838   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Start
	I0816 00:33:04.444035   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring networks are active...
	I0816 00:33:04.444849   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network default is active
	I0816 00:33:04.445168   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network mk-embed-certs-758469 is active
	I0816 00:33:04.445491   78713 main.go:141] libmachine: (embed-certs-758469) Getting domain xml...
	I0816 00:33:04.446159   78713 main.go:141] libmachine: (embed-certs-758469) Creating domain...
	I0816 00:33:05.654817   78713 main.go:141] libmachine: (embed-certs-758469) Waiting to get IP...
	I0816 00:33:05.655625   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.656020   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.656064   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.655983   79868 retry.go:31] will retry after 273.341379ms: waiting for machine to come up
	I0816 00:33:05.930542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.931038   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.931061   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.931001   79868 retry.go:31] will retry after 320.172619ms: waiting for machine to come up
	I0816 00:33:06.252718   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.253117   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.253140   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.253091   79868 retry.go:31] will retry after 441.386495ms: waiting for machine to come up
	I0816 00:33:06.695681   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.696108   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.696134   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.696065   79868 retry.go:31] will retry after 491.272986ms: waiting for machine to come up
	I0816 00:33:07.188683   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.189070   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.189092   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.189025   79868 retry.go:31] will retry after 536.865216ms: waiting for machine to come up
	I0816 00:33:07.727831   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.728246   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.728276   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.728193   79868 retry.go:31] will retry after 813.064342ms: waiting for machine to come up
	I0816 00:33:08.543096   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:08.543605   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:08.543637   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:08.543549   79868 retry.go:31] will retry after 1.00495091s: waiting for machine to come up
	I0816 00:33:09.424586   78489 start.go:360] acquireMachinesLock for no-preload-819398: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:33:09.549815   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:09.550226   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:09.550255   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:09.550175   79868 retry.go:31] will retry after 1.483015511s: waiting for machine to come up
	I0816 00:33:11.034879   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:11.035277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:11.035315   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:11.035224   79868 retry.go:31] will retry after 1.513237522s: waiting for machine to come up
	I0816 00:33:12.550817   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:12.551172   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:12.551196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:12.551126   79868 retry.go:31] will retry after 1.483165174s: waiting for machine to come up
	I0816 00:33:14.036748   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:14.037142   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:14.037170   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:14.037087   79868 retry.go:31] will retry after 1.772679163s: waiting for machine to come up
	I0816 00:33:15.811699   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:15.812300   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:15.812334   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:15.812226   79868 retry.go:31] will retry after 3.026936601s: waiting for machine to come up
	I0816 00:33:18.842362   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:18.842759   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:18.842788   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:18.842715   79868 retry.go:31] will retry after 4.400445691s: waiting for machine to come up
	I0816 00:33:23.247813   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248223   78713 main.go:141] libmachine: (embed-certs-758469) Found IP for machine: 192.168.39.185
	I0816 00:33:23.248254   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has current primary IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248265   78713 main.go:141] libmachine: (embed-certs-758469) Reserving static IP address...
	I0816 00:33:23.248613   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.248641   78713 main.go:141] libmachine: (embed-certs-758469) DBG | skip adding static IP to network mk-embed-certs-758469 - found existing host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"}
	I0816 00:33:23.248654   78713 main.go:141] libmachine: (embed-certs-758469) Reserved static IP address: 192.168.39.185
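
The repeated "will retry after …: waiting for machine to come up" lines above are a poll-until-ready loop whose delay grows between attempts. As a minimal sketch of that pattern only (the probe, delays, jitter, and timeout are illustrative assumptions, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls probe until it reports ready, sleeping a little longer
// (with some jitter) after every failed attempt, up to an overall timeout.
func waitFor(probe func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ready, err := probe()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))/2
		fmt.Printf("attempt %d: will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay between probes
	}
}

func main() {
	tries := 0
	// Toy probe: pretend the VM reports an IP on the fifth check.
	_ = waitFor(func() (bool, error) { tries++; return tries >= 5, nil }, 30*time.Second)
}
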
	I0816 00:33:23.248673   78713 main.go:141] libmachine: (embed-certs-758469) Waiting for SSH to be available...
	I0816 00:33:23.248687   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Getting to WaitForSSH function...
	I0816 00:33:23.250607   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.250931   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.250965   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.251113   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH client type: external
	I0816 00:33:23.251141   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa (-rw-------)
	I0816 00:33:23.251179   78713 main.go:141] libmachine: (embed-certs-758469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:23.251196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | About to run SSH command:
	I0816 00:33:23.251211   78713 main.go:141] libmachine: (embed-certs-758469) DBG | exit 0
	I0816 00:33:23.373899   78713 main.go:141] libmachine: (embed-certs-758469) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:23.374270   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetConfigRaw
	I0816 00:33:23.374914   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.377034   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377343   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.377370   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377561   78713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/config.json ...
	I0816 00:33:23.377760   78713 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:23.377776   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:23.378014   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.379950   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380248   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.380277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380369   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.380524   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380668   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380795   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.380950   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.381134   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.381145   78713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:23.486074   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:23.486106   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486462   78713 buildroot.go:166] provisioning hostname "embed-certs-758469"
	I0816 00:33:23.486491   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486677   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.489520   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.489905   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.489924   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.490108   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.490279   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490427   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490566   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.490730   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.490901   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.490920   78713 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-758469 && echo "embed-certs-758469" | sudo tee /etc/hostname
	I0816 00:33:23.614635   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-758469
	
	I0816 00:33:23.614671   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.617308   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617673   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.617701   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617881   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.618087   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618351   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.618536   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.618721   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.618746   78713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-758469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-758469/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-758469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:23.734901   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:23.734931   78713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:23.734946   78713 buildroot.go:174] setting up certificates
	I0816 00:33:23.734953   78713 provision.go:84] configureAuth start
	I0816 00:33:23.734961   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.735255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.737952   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738312   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.738341   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738445   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.740589   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.740926   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.740953   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.741060   78713 provision.go:143] copyHostCerts
	I0816 00:33:23.741121   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:23.741138   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:23.741203   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:23.741357   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:23.741367   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:23.741393   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:23.741452   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:23.741458   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:23.741478   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:23.741525   78713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.embed-certs-758469 san=[127.0.0.1 192.168.39.185 embed-certs-758469 localhost minikube]
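
The provisioning step above generates a server certificate whose SAN list covers 127.0.0.1, the machine IP, the profile name, localhost, and minikube. As a rough sketch of issuing a certificate with such a SAN list using Go's standard library (self-signed here only to keep it short, whereas the real server.pem is signed by the minikube CA; every other parameter is an illustrative assumption):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-758469"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN entries matching the san=[...] list in the log line above.
		DNSNames:    []string{"embed-certs-758469", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.185")},
	}
	// Self-signed to keep the sketch short; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
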
	I0816 00:33:23.871115   78713 provision.go:177] copyRemoteCerts
	I0816 00:33:23.871167   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:23.871190   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.874049   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874505   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.874538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874720   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.874913   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.875079   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.875210   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:23.959910   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:23.984454   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:33:24.009067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:24.036195   78713 provision.go:87] duration metric: took 301.229994ms to configureAuth
	I0816 00:33:24.036218   78713 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:24.036389   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:24.036453   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.039196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.039562   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039771   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.039970   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040125   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040224   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.040372   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.040584   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.040612   78713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:24.550693   78747 start.go:364] duration metric: took 4m44.527028624s to acquireMachinesLock for "default-k8s-diff-port-616827"
	I0816 00:33:24.550757   78747 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:24.550763   78747 fix.go:54] fixHost starting: 
	I0816 00:33:24.551164   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:24.551203   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:24.567741   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0816 00:33:24.568138   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:24.568674   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:33:24.568703   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:24.569017   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:24.569212   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:24.569385   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:33:24.570856   78747 fix.go:112] recreateIfNeeded on default-k8s-diff-port-616827: state=Stopped err=<nil>
	I0816 00:33:24.570901   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	W0816 00:33:24.571074   78747 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:24.572673   78747 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-616827" ...
	I0816 00:33:24.574220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Start
	I0816 00:33:24.574403   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring networks are active...
	I0816 00:33:24.575086   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network default is active
	I0816 00:33:24.575528   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network mk-default-k8s-diff-port-616827 is active
	I0816 00:33:24.576033   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Getting domain xml...
	I0816 00:33:24.576734   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Creating domain...
	I0816 00:33:24.314921   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:24.314951   78713 machine.go:96] duration metric: took 937.178488ms to provisionDockerMachine
	I0816 00:33:24.314964   78713 start.go:293] postStartSetup for "embed-certs-758469" (driver="kvm2")
	I0816 00:33:24.314974   78713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:24.315007   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.315405   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:24.315430   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.317962   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318242   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.318270   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318390   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.318588   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.318763   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.318900   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.400628   78713 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:24.405061   78713 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:24.405082   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:24.405148   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:24.405215   78713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:24.405302   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:24.414985   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:24.439646   78713 start.go:296] duration metric: took 124.668147ms for postStartSetup
	I0816 00:33:24.439692   78713 fix.go:56] duration metric: took 20.017583324s for fixHost
	I0816 00:33:24.439719   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.442551   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.442920   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.442954   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.443051   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.443257   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443434   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443567   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.443740   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.443912   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.443921   78713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:24.550562   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768404.525876526
	
	I0816 00:33:24.550588   78713 fix.go:216] guest clock: 1723768404.525876526
	I0816 00:33:24.550599   78713 fix.go:229] Guest: 2024-08-16 00:33:24.525876526 +0000 UTC Remote: 2024-08-16 00:33:24.439696953 +0000 UTC m=+285.318245053 (delta=86.179573ms)
	I0816 00:33:24.550618   78713 fix.go:200] guest clock delta is within tolerance: 86.179573ms
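
The two guest-clock lines above compare the VM's clock against the host's and accept the roughly 86 ms difference as being within tolerance. A tiny sketch of that comparison, using the two timestamps printed in the log and an assumed tolerance value:

package main

import (
	"fmt"
	"time"
)

// skewWithinTolerance reports the absolute host/guest clock difference and
// whether it falls inside the allowed tolerance.
func skewWithinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// The "Remote" (host) and "Guest" timestamps printed in the log above.
	host := time.Date(2024, 8, 16, 0, 33, 24, 439696953, time.UTC)
	guest := time.Date(2024, 8, 16, 0, 33, 24, 525876526, time.UTC)
	delta, ok := skewWithinTolerance(host, guest, 2*time.Second) // tolerance value assumed
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
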
	I0816 00:33:24.550623   78713 start.go:83] releasing machines lock for "embed-certs-758469", held for 20.128541713s
	I0816 00:33:24.550647   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.551090   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:24.554013   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554358   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.554382   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554572   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555062   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555222   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555279   78713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:24.555330   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.555441   78713 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:24.555463   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.558216   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558368   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558567   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558719   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558723   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558742   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558925   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559074   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559205   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559285   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.559329   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.656926   78713 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:24.662590   78713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:24.811290   78713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:24.817486   78713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:24.817570   78713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:24.838317   78713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:24.838342   78713 start.go:495] detecting cgroup driver to use...
	I0816 00:33:24.838396   78713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:24.856294   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:24.875603   78713 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:24.875650   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:24.890144   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:24.904327   78713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:25.018130   78713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:25.149712   78713 docker.go:233] disabling docker service ...
	I0816 00:33:25.149795   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:25.165494   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:25.179554   78713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:25.330982   78713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:25.476436   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:25.493242   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:25.515688   78713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:25.515762   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.529924   78713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:25.529997   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.541412   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.551836   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.563356   78713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:25.574486   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.585533   78713 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.604169   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
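Taken together, the sed/grep edits logged above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave the CRI-O drop-in looking roughly like the sketch below. This is a reconstruction from the commands shown here, not a dump of the file actually on the node:

    # /etc/crio/crio.conf.d/02-crio.conf (approximate shape after the edits above)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]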
	I0816 00:33:25.615335   78713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:25.629366   78713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:25.629427   78713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:25.645937   78713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:25.657132   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:25.771891   78713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:25.914817   78713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:25.914904   78713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:25.919572   78713 start.go:563] Will wait 60s for crictl version
	I0816 00:33:25.919620   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:33:25.923419   78713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:25.969387   78713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:25.969484   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.002529   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.035709   78713 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:26.036921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:26.039638   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040001   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:26.040023   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040254   78713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:26.044444   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:26.057172   78713 kubeadm.go:883] updating cluster {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:26.057326   78713 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:26.057382   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:26.093950   78713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:26.094031   78713 ssh_runner.go:195] Run: which lz4
	I0816 00:33:26.097998   78713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:26.102152   78713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:26.102183   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:27.538323   78713 crio.go:462] duration metric: took 1.440354469s to copy over tarball
	I0816 00:33:27.538400   78713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:25.885210   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting to get IP...
	I0816 00:33:25.886135   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886555   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:25.886538   80004 retry.go:31] will retry after 214.751664ms: waiting for machine to come up
	I0816 00:33:26.103182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103652   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103677   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.103603   80004 retry.go:31] will retry after 239.667632ms: waiting for machine to come up
	I0816 00:33:26.345223   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345776   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.345701   80004 retry.go:31] will retry after 474.740445ms: waiting for machine to come up
	I0816 00:33:26.822224   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822682   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822716   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.822639   80004 retry.go:31] will retry after 574.324493ms: waiting for machine to come up
	I0816 00:33:27.398433   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398939   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398971   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.398904   80004 retry.go:31] will retry after 567.388033ms: waiting for machine to come up
	I0816 00:33:27.967686   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968225   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.968093   80004 retry.go:31] will retry after 940.450394ms: waiting for machine to come up
	I0816 00:33:28.910549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911058   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911088   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:28.911031   80004 retry.go:31] will retry after 919.494645ms: waiting for machine to come up
	I0816 00:33:29.832687   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833204   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833244   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:29.833189   80004 retry.go:31] will retry after 1.332024716s: waiting for machine to come up
	I0816 00:33:29.677224   78713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.138774475s)
	I0816 00:33:29.677252   78713 crio.go:469] duration metric: took 2.138901242s to extract the tarball
	I0816 00:33:29.677261   78713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:29.716438   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:29.768597   78713 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:29.768622   78713 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:33:29.768634   78713 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.0 crio true true} ...
	I0816 00:33:29.768787   78713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-758469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:29.768874   78713 ssh_runner.go:195] Run: crio config
	I0816 00:33:29.813584   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:29.813607   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:29.813620   78713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:29.813644   78713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-758469 NodeName:embed-certs-758469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:29.813776   78713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-758469"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:29.813862   78713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:29.825680   78713 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:29.825744   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:29.836314   78713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 00:33:29.853030   78713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:29.869368   78713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 00:33:29.886814   78713 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:29.890644   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:29.903138   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:30.040503   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:30.058323   78713 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469 for IP: 192.168.39.185
	I0816 00:33:30.058351   78713 certs.go:194] generating shared ca certs ...
	I0816 00:33:30.058372   78713 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:30.058559   78713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:30.058624   78713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:30.058638   78713 certs.go:256] generating profile certs ...
	I0816 00:33:30.058778   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/client.key
	I0816 00:33:30.058873   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key.0d0e36ad
	I0816 00:33:30.058930   78713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key
	I0816 00:33:30.059101   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:30.059146   78713 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:30.059162   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:30.059197   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:30.059251   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:30.059285   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:30.059345   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:30.060202   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:30.098381   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:30.135142   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:30.175518   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:30.214349   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 00:33:30.249278   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:30.273772   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:30.298067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:30.324935   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:30.351149   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:30.375636   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:30.399250   78713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:30.417646   78713 ssh_runner.go:195] Run: openssl version
	I0816 00:33:30.423691   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:30.435254   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439651   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439700   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.445673   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:30.456779   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:30.467848   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472199   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472274   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.478109   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:30.489481   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:30.500747   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505116   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505162   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.510739   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
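The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how the system trust store in /etc/ssl/certs is indexed. A minimal sketch of the per-certificate step, using the minikubeCA file from this run as the example:

    # compute the subject hash and install the cert under <hash>.0, as the commands above do
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"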
	I0816 00:33:30.521829   78713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:30.526444   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:30.532373   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:30.538402   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:30.544697   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:30.550762   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:30.556573   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
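The "-checkend 86400" probes above ask OpenSSL whether each certificate expires within the next 86400 seconds (24 hours); the command exits 0 if the certificate remains valid past that window. For example, run by hand against one of the files checked above:

    # exit status 0: valid for at least another 24h; 1: will expire within that window (or already has)
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400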
	I0816 00:33:30.562513   78713 kubeadm.go:392] StartCluster: {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:30.562602   78713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:30.562650   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.607119   78713 cri.go:89] found id: ""
	I0816 00:33:30.607197   78713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:30.617798   78713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:30.617818   78713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:30.617873   78713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:30.627988   78713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:30.628976   78713 kubeconfig.go:125] found "embed-certs-758469" server: "https://192.168.39.185:8443"
	I0816 00:33:30.631601   78713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:30.642001   78713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.185
	I0816 00:33:30.642036   78713 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:30.642047   78713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:30.642088   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.685946   78713 cri.go:89] found id: ""
	I0816 00:33:30.686049   78713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:30.704130   78713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:30.714467   78713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:30.714490   78713 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:30.714534   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:33:30.723924   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:30.723985   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:30.733804   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:33:30.743345   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:30.743412   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:30.753604   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.763271   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:30.763340   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.773121   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:33:30.782507   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:30.782565   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:30.792652   78713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
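The four grep/rm pairs above all apply the same rule: keep a kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the kubeadm init phases that follow regenerate it. A compact, illustrative equivalent of that cleanup (not minikube's actual code path):

    # drop stale kubeconfigs that do not reference the expected control-plane endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done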
	I0816 00:33:30.802523   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:30.923193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.206424   78713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.283195087s)
	I0816 00:33:32.206449   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.435275   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.509193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.590924   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:32.591020   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.091804   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.591198   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.607568   78713 api_server.go:72] duration metric: took 1.016656713s to wait for apiserver process to appear ...
	I0816 00:33:33.607596   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:33.607619   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:31.166506   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166900   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166927   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:31.166860   80004 retry.go:31] will retry after 1.213971674s: waiting for machine to come up
	I0816 00:33:32.382376   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382862   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382889   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:32.382821   80004 retry.go:31] will retry after 2.115615681s: waiting for machine to come up
	I0816 00:33:34.501236   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501697   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:34.501646   80004 retry.go:31] will retry after 2.495252025s: waiting for machine to come up
	I0816 00:33:36.334341   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.334374   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.334389   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.351971   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.352011   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.608364   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.614582   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:36.614619   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.107654   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.113352   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.113384   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.607902   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.614677   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.614710   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.108329   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.112493   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.112521   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.608061   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.613134   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.613172   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.107667   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.111920   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:39.111954   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.608190   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.613818   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:33:39.619467   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:39.619490   78713 api_server.go:131] duration metric: took 6.011887872s to wait for apiserver health ...
	I0816 00:33:39.619499   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:39.619504   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:39.621572   78713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:36.999158   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999616   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999645   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:36.999576   80004 retry.go:31] will retry after 2.736710806s: waiting for machine to come up
	I0816 00:33:39.737818   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738286   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738320   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:39.738215   80004 retry.go:31] will retry after 3.3205645s: waiting for machine to come up
	I0816 00:33:39.623254   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:39.633910   78713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:33:39.653736   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:39.663942   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:39.663983   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:39.663994   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:39.664044   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:39.664060   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:39.664067   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:33:39.664078   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:39.664089   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:39.664107   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:33:39.664118   78713 system_pods.go:74] duration metric: took 10.358906ms to wait for pod list to return data ...
	I0816 00:33:39.664127   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:39.667639   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:39.667669   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:39.667682   78713 node_conditions.go:105] duration metric: took 3.547018ms to run NodePressure ...
	I0816 00:33:39.667701   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:39.929620   78713 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934264   78713 kubeadm.go:739] kubelet initialised
	I0816 00:33:39.934289   78713 kubeadm.go:740] duration metric: took 4.64037ms waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934299   78713 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:39.938771   78713 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.943735   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943760   78713 pod_ready.go:82] duration metric: took 4.962601ms for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.943772   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943781   78713 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.947900   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947925   78713 pod_ready.go:82] duration metric: took 4.129605ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.947936   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947943   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.953367   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953400   78713 pod_ready.go:82] duration metric: took 5.445682ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.953412   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953422   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.057510   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057533   78713 pod_ready.go:82] duration metric: took 104.099944ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.057543   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057548   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.458355   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458389   78713 pod_ready.go:82] duration metric: took 400.832009ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.458400   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458408   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.857939   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857964   78713 pod_ready.go:82] duration metric: took 399.549123ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.857974   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857980   78713 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:41.257101   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257126   78713 pod_ready.go:82] duration metric: took 399.13078ms for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:41.257135   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257142   78713 pod_ready.go:39] duration metric: took 1.322827054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:41.257159   78713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:33:41.269076   78713 ops.go:34] apiserver oom_adj: -16
	I0816 00:33:41.269098   78713 kubeadm.go:597] duration metric: took 10.651273415s to restartPrimaryControlPlane
	I0816 00:33:41.269107   78713 kubeadm.go:394] duration metric: took 10.706599955s to StartCluster
	I0816 00:33:41.269127   78713 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.269191   78713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:33:41.271380   78713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.271679   78713 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:33:41.271714   78713 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:33:41.271812   78713 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-758469"
	I0816 00:33:41.271834   78713 addons.go:69] Setting default-storageclass=true in profile "embed-certs-758469"
	I0816 00:33:41.271845   78713 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-758469"
	W0816 00:33:41.271858   78713 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:33:41.271874   78713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-758469"
	I0816 00:33:41.271882   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:41.271891   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.271860   78713 addons.go:69] Setting metrics-server=true in profile "embed-certs-758469"
	I0816 00:33:41.271934   78713 addons.go:234] Setting addon metrics-server=true in "embed-certs-758469"
	W0816 00:33:41.271952   78713 addons.go:243] addon metrics-server should already be in state true
	I0816 00:33:41.272022   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.272324   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272575   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272604   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272704   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272718   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272745   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.274599   78713 out.go:177] * Verifying Kubernetes components...
	I0816 00:33:41.276283   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:41.292526   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0816 00:33:41.292560   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0816 00:33:41.292556   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43083
	I0816 00:33:41.293000   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293053   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293004   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293482   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293499   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293592   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293606   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293625   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293607   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293891   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293939   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293976   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.294132   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.294475   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294483   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294517   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.294522   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.297714   78713 addons.go:234] Setting addon default-storageclass=true in "embed-certs-758469"
	W0816 00:33:41.297747   78713 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:33:41.297787   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.298192   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.298238   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.310002   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0816 00:33:41.310000   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0816 00:33:41.310469   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310521   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310899   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.310917   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311027   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.311048   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311293   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311476   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.311491   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311642   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.313614   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.313697   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.315474   78713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:33:41.315484   78713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:33:41.316719   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0816 00:33:41.316887   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:33:41.316902   78713 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:33:41.316921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.316975   78713 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.316985   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:33:41.316995   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.317061   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.317572   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.317594   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.317941   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.318669   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.318702   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.320288   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320668   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.320695   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320726   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320939   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321241   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.321267   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.321402   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321497   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.321547   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321592   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.321883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.322021   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.334230   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0816 00:33:41.334580   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.335088   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.335107   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.335387   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.335549   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.336891   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.337084   78713 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.337100   78713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:33:41.337115   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.340204   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340667   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.340697   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340837   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.340987   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.341120   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.341277   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.476131   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:41.502242   78713 node_ready.go:35] waiting up to 6m0s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:41.559562   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.575913   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:33:41.575937   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:33:41.614763   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:33:41.614784   78713 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:33:41.628658   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.670367   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:41.670393   78713 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:33:41.746638   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:42.849125   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.22043382s)
	I0816 00:33:42.849189   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849202   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849397   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.289807606s)
	I0816 00:33:42.849438   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849448   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849478   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849514   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849524   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849538   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849550   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849761   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849803   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849813   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849825   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849833   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.850018   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850033   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.850059   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850059   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.850078   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856398   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.856419   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.856647   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.856667   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856676   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901261   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1545817s)
	I0816 00:33:42.901314   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901329   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901619   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901680   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901694   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901704   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901713   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901953   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901973   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901986   78713 addons.go:475] Verifying addon metrics-server=true in "embed-certs-758469"
	I0816 00:33:42.904677   78713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 00:33:42.905802   78713 addons.go:510] duration metric: took 1.634089536s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 00:33:43.506584   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:44.254575   79191 start.go:364] duration metric: took 3m52.362627542s to acquireMachinesLock for "old-k8s-version-098619"
	I0816 00:33:44.254648   79191 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:44.254659   79191 fix.go:54] fixHost starting: 
	I0816 00:33:44.255099   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:44.255137   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:44.271236   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0816 00:33:44.271591   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:44.272030   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:33:44.272052   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:44.272328   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:44.272503   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:33:44.272660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetState
	I0816 00:33:44.274235   79191 fix.go:112] recreateIfNeeded on old-k8s-version-098619: state=Stopped err=<nil>
	I0816 00:33:44.274272   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	W0816 00:33:44.274415   79191 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:44.275978   79191 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-098619" ...
	I0816 00:33:43.059949   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060413   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Found IP for machine: 192.168.50.128
	I0816 00:33:43.060440   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserving static IP address...
	I0816 00:33:43.060479   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has current primary IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060881   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.060906   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | skip adding static IP to network mk-default-k8s-diff-port-616827 - found existing host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"}
	I0816 00:33:43.060921   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserved static IP address: 192.168.50.128
	I0816 00:33:43.060937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for SSH to be available...
	I0816 00:33:43.060952   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Getting to WaitForSSH function...
	I0816 00:33:43.063249   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063552   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.063592   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH client type: external
	I0816 00:33:43.063833   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa (-rw-------)
	I0816 00:33:43.063877   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:43.063896   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | About to run SSH command:
	I0816 00:33:43.063905   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | exit 0
	I0816 00:33:43.185986   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:43.186338   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetConfigRaw
	I0816 00:33:43.186944   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.189324   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189617   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.189643   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189890   78747 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/config.json ...
	I0816 00:33:43.190166   78747 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:43.190192   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:43.190401   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.192515   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192836   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.192865   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192940   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.193118   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193280   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193454   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.193614   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.193812   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.193825   78747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:43.290143   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:43.290168   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290395   78747 buildroot.go:166] provisioning hostname "default-k8s-diff-port-616827"
	I0816 00:33:43.290422   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290603   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.293231   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.293665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293829   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.294038   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294195   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294325   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.294479   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.294685   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.294703   78747 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-616827 && echo "default-k8s-diff-port-616827" | sudo tee /etc/hostname
	I0816 00:33:43.406631   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-616827
	
	I0816 00:33:43.406655   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.409271   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409610   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.409641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409794   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.409984   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410160   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.410491   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.410670   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.410695   78747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-616827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-616827/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-616827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:43.515766   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:43.515796   78747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:43.515829   78747 buildroot.go:174] setting up certificates
	I0816 00:33:43.515841   78747 provision.go:84] configureAuth start
	I0816 00:33:43.515850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.516128   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.518730   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519055   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.519087   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.521186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.521538   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521691   78747 provision.go:143] copyHostCerts
	I0816 00:33:43.521746   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:43.521764   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:43.521822   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:43.521949   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:43.521959   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:43.521982   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:43.522050   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:43.522057   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:43.522074   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:43.522132   78747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-616827 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-616827 localhost minikube]
	I0816 00:33:43.601126   78747 provision.go:177] copyRemoteCerts
	I0816 00:33:43.601179   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:43.601203   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.603816   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604148   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.604180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.604549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.604725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.604863   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:43.686829   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:43.712297   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 00:33:43.738057   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:43.762820   78747 provision.go:87] duration metric: took 246.967064ms to configureAuth
	I0816 00:33:43.762853   78747 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:43.763069   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:43.763155   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.765886   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766256   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.766287   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766447   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.766641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766813   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.767164   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.767318   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.767334   78747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:44.025337   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:44.025373   78747 machine.go:96] duration metric: took 835.190539ms to provisionDockerMachine
	I0816 00:33:44.025387   78747 start.go:293] postStartSetup for "default-k8s-diff-port-616827" (driver="kvm2")
	I0816 00:33:44.025401   78747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:44.025416   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.025780   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:44.025804   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.028307   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028591   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.028618   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028740   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.028925   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.029117   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.029281   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.109481   78747 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:44.115290   78747 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:44.115317   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:44.115388   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:44.115482   78747 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:44.115597   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:44.128677   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:44.154643   78747 start.go:296] duration metric: took 129.242138ms for postStartSetup
	I0816 00:33:44.154685   78747 fix.go:56] duration metric: took 19.603921801s for fixHost
	I0816 00:33:44.154705   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.157477   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.157907   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.157937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.158051   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.158264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158411   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158580   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.158757   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:44.158981   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:44.158996   78747 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:44.254419   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768424.226223949
	
	I0816 00:33:44.254443   78747 fix.go:216] guest clock: 1723768424.226223949
	I0816 00:33:44.254452   78747 fix.go:229] Guest: 2024-08-16 00:33:44.226223949 +0000 UTC Remote: 2024-08-16 00:33:44.154688835 +0000 UTC m=+304.265683075 (delta=71.535114ms)
	I0816 00:33:44.254476   78747 fix.go:200] guest clock delta is within tolerance: 71.535114ms
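	The three log lines above show how minikube validates the VM clock after provisioning: it runs date +%s.%N over SSH, compares the guest timestamp with the host clock, and accepts the resulting 71.5ms skew. The sketch below reproduces that kind of check in Go; the helper name and the 2-second tolerance are assumptions for illustration, not values taken from minikube's code.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the output of `date +%s.%N` (for example
	// "1723768424.226223949") into a time.Time. Helper name is illustrative,
	// and the fractional part is assumed to always carry nine digits.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1723768424.226223949")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// Assumed tolerance; the report only shows that a ~71ms skew was accepted.
		const tolerance = 2 * time.Second
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync the clock\n", delta)
		}
	}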
	I0816 00:33:44.254482   78747 start.go:83] releasing machines lock for "default-k8s-diff-port-616827", held for 19.703745588s
	I0816 00:33:44.254504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.254750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:44.257516   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.257879   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.257910   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.258111   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258828   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258908   78747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:44.258946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.259033   78747 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:44.259048   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.261566   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261814   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261978   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262008   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262112   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262145   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262254   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262390   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262442   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262502   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.262549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262642   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.346934   78747 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:44.370413   78747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:44.519130   78747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:44.525276   78747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:44.525344   78747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:44.549125   78747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:44.549154   78747 start.go:495] detecting cgroup driver to use...
	I0816 00:33:44.549227   78747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:44.575221   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:44.592214   78747 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:44.592270   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:44.607403   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:44.629127   78747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:44.786185   78747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:44.954426   78747 docker.go:233] disabling docker service ...
	I0816 00:33:44.954495   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:44.975169   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:44.994113   78747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:45.142572   78747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:45.297255   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:45.313401   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:45.334780   78747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:45.334851   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.346039   78747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:45.346111   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.357681   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.368607   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.381164   78747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:45.394060   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.406010   78747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.424720   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
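	For readability, the net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf is reconstructed below from the commands themselves; this is an inferred excerpt, not a capture of the file from the VM:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]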
	I0816 00:33:45.437372   78747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:45.450515   78747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:45.450595   78747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:45.465740   78747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
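	The preceding lines show the usual fallback when the bridge netfilter sysctl cannot be read: the key only exists once the br_netfilter module is loaded, so minikube loads the module and then enables IPv4 forwarding. Below is a minimal Go sketch of the same check-then-fallback sequence, wiring the commands through os/exec; the helper name and error handling are illustrative.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and returns its combined output; helper name is illustrative.
	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		// The bridge netfilter sysctl is only present once br_netfilter is loaded.
		if _, err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			if out, err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				fmt.Println("could not load br_netfilter:", err, out)
			}
		}
		// kube-proxy and the bridge CNI expect the node to forward IPv4 traffic.
		if out, err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			fmt.Println("enabling ip_forward failed:", err, out)
		}
	}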
	I0816 00:33:45.476568   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:45.629000   78747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:45.781044   78747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:45.781142   78747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:45.787480   78747 start.go:563] Will wait 60s for crictl version
	I0816 00:33:45.787551   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:33:45.791907   78747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:45.836939   78747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:45.837025   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.869365   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.907162   78747 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:44.277288   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .Start
	I0816 00:33:44.277426   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring networks are active...
	I0816 00:33:44.278141   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network default is active
	I0816 00:33:44.278471   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network mk-old-k8s-version-098619 is active
	I0816 00:33:44.278820   79191 main.go:141] libmachine: (old-k8s-version-098619) Getting domain xml...
	I0816 00:33:44.279523   79191 main.go:141] libmachine: (old-k8s-version-098619) Creating domain...
	I0816 00:33:45.643704   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting to get IP...
	I0816 00:33:45.644691   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.645213   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.645247   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.645162   80212 retry.go:31] will retry after 198.057532ms: waiting for machine to come up
	I0816 00:33:45.844756   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.845297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.845321   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.845247   80212 retry.go:31] will retry after 288.630433ms: waiting for machine to come up
	I0816 00:33:46.135913   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.136413   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.136442   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.136365   80212 retry.go:31] will retry after 456.48021ms: waiting for machine to come up
	I0816 00:33:46.594170   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.594649   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.594678   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.594592   80212 retry.go:31] will retry after 501.49137ms: waiting for machine to come up
	I0816 00:33:46.006040   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:47.007144   78713 node_ready.go:49] node "embed-certs-758469" has status "Ready":"True"
	I0816 00:33:47.007172   78713 node_ready.go:38] duration metric: took 5.504897396s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:47.007183   78713 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:47.014800   78713 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:49.022567   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:45.908518   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:45.912248   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.912762   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:45.912797   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.913115   78747 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:45.917917   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
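	The two lines above first grep /etc/hosts for an existing host.minikube.internal entry and then rewrite the file by filtering out any stale line and appending a fresh one; staging the result in /tmp and copying it back works whether or not an entry already existed. A small Go sketch that assembles the same shell pipeline (the function name is an illustrative assumption):

	package main

	import "fmt"

	// hostsUpdateCmd builds the grep -v / echo / cp pipeline seen in the log: it drops any
	// existing line ending in "<tab>name" from /etc/hosts and appends "ip<tab>name".
	func hostsUpdateCmd(ip, name string) string {
		entry := ip + "\t" + name // real tab between address and hostname
		return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`, name, entry)
	}

	func main() {
		fmt.Println(hostsUpdateCmd("192.168.50.1", "host.minikube.internal"))
	}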
	I0816 00:33:45.935113   78747 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:45.935294   78747 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:45.935351   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:45.988031   78747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:45.988115   78747 ssh_runner.go:195] Run: which lz4
	I0816 00:33:45.992508   78747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:45.997108   78747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:45.997199   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:47.459404   78747 crio.go:462] duration metric: took 1.466928999s to copy over tarball
	I0816 00:33:47.459478   78747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:49.621449   78747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16194292s)
	I0816 00:33:49.621484   78747 crio.go:469] duration metric: took 2.162054092s to extract the tarball
	I0816 00:33:49.621494   78747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:49.660378   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:49.709446   78747 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:49.709471   78747 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:33:49.709481   78747 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.0 crio true true} ...
	I0816 00:33:49.709609   78747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-616827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:49.709704   78747 ssh_runner.go:195] Run: crio config
	I0816 00:33:49.756470   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:49.756497   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:49.756510   78747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:49.756534   78747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-616827 NodeName:default-k8s-diff-port-616827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:49.756745   78747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-616827"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:49.756827   78747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:49.766769   78747 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:49.766840   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:49.776367   78747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 00:33:49.793191   78747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:49.811993   78747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0816 00:33:49.829787   78747 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:49.833673   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:49.846246   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:47.098130   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.098614   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.098645   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.098569   80212 retry.go:31] will retry after 663.568587ms: waiting for machine to come up
	I0816 00:33:47.763930   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.764447   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.764470   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.764376   80212 retry.go:31] will retry after 679.581678ms: waiting for machine to come up
	I0816 00:33:48.446082   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:48.446552   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:48.446579   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:48.446498   80212 retry.go:31] will retry after 1.090430732s: waiting for machine to come up
	I0816 00:33:49.538961   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:49.539454   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:49.539482   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:49.539397   80212 retry.go:31] will retry after 1.039148258s: waiting for machine to come up
	I0816 00:33:50.579642   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:50.580119   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:50.580144   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:50.580074   80212 retry.go:31] will retry after 1.440992413s: waiting for machine to come up
	I0816 00:33:51.788858   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:54.022577   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:49.963020   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:49.980142   78747 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827 for IP: 192.168.50.128
	I0816 00:33:49.980170   78747 certs.go:194] generating shared ca certs ...
	I0816 00:33:49.980192   78747 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:49.980408   78747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:49.980470   78747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:49.980489   78747 certs.go:256] generating profile certs ...
	I0816 00:33:49.980583   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/client.key
	I0816 00:33:49.980669   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key.2062a467
	I0816 00:33:49.980737   78747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key
	I0816 00:33:49.980891   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:49.980940   78747 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:49.980949   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:49.980984   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:49.981021   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:49.981050   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:49.981102   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:49.981835   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:50.014530   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:50.057377   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:50.085730   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:50.121721   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 00:33:50.166448   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:50.195059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:50.220059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:50.244288   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:50.268463   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:50.293203   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:50.318859   78747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:50.336625   78747 ssh_runner.go:195] Run: openssl version
	I0816 00:33:50.343301   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:50.355408   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360245   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360312   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.366435   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:50.377753   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:50.389482   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394337   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394419   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.400279   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:33:50.412410   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:50.424279   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429013   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429077   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.435095   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:50.448148   78747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:50.453251   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:50.459730   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:50.466145   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:50.472438   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:50.478701   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:50.485081   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 00:33:50.490958   78747 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:50.491091   78747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:50.491173   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.545458   78747 cri.go:89] found id: ""
	I0816 00:33:50.545532   78747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:50.557054   78747 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:50.557074   78747 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:50.557122   78747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:50.570313   78747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:50.571774   78747 kubeconfig.go:125] found "default-k8s-diff-port-616827" server: "https://192.168.50.128:8444"
	I0816 00:33:50.574969   78747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:50.586066   78747 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I0816 00:33:50.586101   78747 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:50.586114   78747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:50.586172   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.631347   78747 cri.go:89] found id: ""
	I0816 00:33:50.631416   78747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:50.651296   78747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:50.665358   78747 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:50.665387   78747 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:50.665427   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 00:33:50.678634   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:50.678706   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:50.690376   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 00:33:50.702070   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:50.702132   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:50.714117   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.725349   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:50.725413   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.735691   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 00:33:50.745524   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:50.745598   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:50.756310   78747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:50.771825   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:50.908593   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.046812   78747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138178717s)
	I0816 00:33:52.046863   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.282111   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.357877   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.485435   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:52.485531   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:52.985717   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.486461   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.522663   78747 api_server.go:72] duration metric: took 1.037234176s to wait for apiserver process to appear ...
	I0816 00:33:53.522692   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:53.522713   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:52.022573   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:52.023319   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:52.023352   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:52.023226   80212 retry.go:31] will retry after 1.814668747s: waiting for machine to come up
	I0816 00:33:53.839539   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:53.839916   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:53.839944   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:53.839861   80212 retry.go:31] will retry after 1.900379439s: waiting for machine to come up
	I0816 00:33:55.742480   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:55.742981   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:55.743004   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:55.742920   80212 retry.go:31] will retry after 2.798728298s: waiting for machine to come up
	I0816 00:33:56.782681   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.782714   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:56.782730   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:56.828595   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.828628   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:57.022870   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.028291   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.028326   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:57.522858   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.533079   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.533120   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.023304   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.029913   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:58.029948   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.523517   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.529934   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:33:58.536872   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:58.536898   78747 api_server.go:131] duration metric: took 5.014199256s to wait for apiserver health ...
	I0816 00:33:58.536907   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:58.536916   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:58.539004   78747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:54.522157   78713 pod_ready.go:93] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.522186   78713 pod_ready.go:82] duration metric: took 7.507358513s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.522201   78713 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529305   78713 pod_ready.go:93] pod "etcd-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.529323   78713 pod_ready.go:82] duration metric: took 7.114484ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529331   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536656   78713 pod_ready.go:93] pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.536688   78713 pod_ready.go:82] duration metric: took 7.349231ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536701   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542615   78713 pod_ready.go:93] pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.542637   78713 pod_ready.go:82] duration metric: took 5.927403ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542650   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548165   78713 pod_ready.go:93] pod "kube-proxy-4xc89" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.548188   78713 pod_ready.go:82] duration metric: took 5.530073ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548200   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919561   78713 pod_ready.go:93] pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.919586   78713 pod_ready.go:82] duration metric: took 371.377774ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919598   78713 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:56.925892   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.926811   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.540592   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:58.554493   78747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:33:58.594341   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:58.605247   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:58.605293   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:58.605304   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:58.605314   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:58.605329   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:58.605342   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:33:58.605351   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:58.605358   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:58.605363   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:33:58.605372   78747 system_pods.go:74] duration metric: took 11.009517ms to wait for pod list to return data ...
	I0816 00:33:58.605384   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:58.609964   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:58.609996   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:58.610007   78747 node_conditions.go:105] duration metric: took 4.615471ms to run NodePressure ...
	I0816 00:33:58.610025   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:58.930292   78747 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937469   78747 kubeadm.go:739] kubelet initialised
	I0816 00:33:58.937499   78747 kubeadm.go:740] duration metric: took 7.181814ms waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937509   78747 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:59.036968   78747 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.046554   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046589   78747 pod_ready.go:82] duration metric: took 9.589918ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.046601   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046618   78747 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.053621   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053654   78747 pod_ready.go:82] duration metric: took 7.022323ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.053669   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053678   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.065329   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065357   78747 pod_ready.go:82] duration metric: took 11.650757ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.065378   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065387   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.074595   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074627   78747 pod_ready.go:82] duration metric: took 9.230183ms for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.074643   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074657   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.399077   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399105   78747 pod_ready.go:82] duration metric: took 324.440722ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.399116   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399124   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.797130   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797158   78747 pod_ready.go:82] duration metric: took 398.024149ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.797169   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797176   78747 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:00.197929   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197961   78747 pod_ready.go:82] duration metric: took 400.777243ms for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:34:00.197976   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197992   78747 pod_ready.go:39] duration metric: took 1.260464876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:00.198024   78747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:34:00.210255   78747 ops.go:34] apiserver oom_adj: -16
	I0816 00:34:00.210278   78747 kubeadm.go:597] duration metric: took 9.653197586s to restartPrimaryControlPlane
	I0816 00:34:00.210302   78747 kubeadm.go:394] duration metric: took 9.719364617s to StartCluster
	I0816 00:34:00.210322   78747 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.210405   78747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:00.212730   78747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.213053   78747 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:34:00.213162   78747 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:34:00.213247   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:00.213277   78747 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213292   78747 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213305   78747 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213313   78747 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:34:00.213344   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213352   78747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-616827"
	I0816 00:34:00.213298   78747 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213413   78747 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213435   78747 addons.go:243] addon metrics-server should already be in state true
	I0816 00:34:00.213463   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213751   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213795   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213752   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213886   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213756   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213992   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.215058   78747 out.go:177] * Verifying Kubernetes components...
	I0816 00:34:00.216719   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:00.229428   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0816 00:34:00.229676   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0816 00:34:00.229881   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230164   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230522   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230538   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230689   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230727   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230850   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.231488   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.231512   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.231754   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.232394   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.232426   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.232909   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0816 00:34:00.233400   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.233959   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.233979   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.234368   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.234576   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.238180   78747 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.238203   78747 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:34:00.238230   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.238598   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.238642   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.249682   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0816 00:34:00.250163   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.250894   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.250919   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.251326   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0816 00:34:00.251324   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.251663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.251828   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.252294   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.252318   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.252863   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.253070   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.253746   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.254958   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.255056   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0816 00:34:00.255513   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.256043   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.256083   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.256121   78747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:00.256494   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.257255   78747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:34:00.257377   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.257422   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.259132   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:34:00.259154   78747 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:34:00.259176   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.259204   78747 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.259223   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:34:00.259241   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.263096   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263213   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263688   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263874   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263996   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264175   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264441   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.264511   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264695   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.274557   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0816 00:34:00.274984   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.275444   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.275463   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.275735   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.275946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.277509   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.277745   78747 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.277762   78747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:34:00.277782   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.280264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280660   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.280689   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280790   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.280982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.281140   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.281286   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.445986   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:00.465112   78747 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:00.568927   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.602693   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.620335   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:34:00.620355   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:34:00.667790   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:34:00.667810   78747 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:34:00.698510   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.698536   78747 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:34:00.723319   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.975635   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.975663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976006   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976007   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976030   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.976044   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.976075   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976347   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976340   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976376   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.983280   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.983304   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.983587   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.983586   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.983620   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.678707   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678733   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.678889   78747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.076166351s)
	I0816 00:34:01.678936   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678955   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679115   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679136   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679145   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679153   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679473   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679497   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679484   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679514   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679521   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679525   78747 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-616827"
	I0816 00:34:01.679528   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679537   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679544   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679821   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679862   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679887   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.683006   78747 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0816 00:33:58.543282   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:58.543753   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:58.543783   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:58.543689   80212 retry.go:31] will retry after 4.402812235s: waiting for machine to come up
	I0816 00:34:00.927244   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:03.428032   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:04.178649   78489 start.go:364] duration metric: took 54.753990439s to acquireMachinesLock for "no-preload-819398"
	I0816 00:34:04.178706   78489 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:34:04.178714   78489 fix.go:54] fixHost starting: 
	I0816 00:34:04.179124   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:04.179162   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:04.195783   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0816 00:34:04.196138   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:04.196590   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:34:04.196614   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:04.196962   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:04.197161   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:04.197303   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:34:04.198795   78489 fix.go:112] recreateIfNeeded on no-preload-819398: state=Stopped err=<nil>
	I0816 00:34:04.198814   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	W0816 00:34:04.198978   78489 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:34:04.200736   78489 out.go:177] * Restarting existing kvm2 VM for "no-preload-819398" ...
	I0816 00:34:01.684641   78747 addons.go:510] duration metric: took 1.471480873s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 00:34:02.473603   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:04.476035   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:02.951078   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951631   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has current primary IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951672   79191 main.go:141] libmachine: (old-k8s-version-098619) Found IP for machine: 192.168.72.137
	I0816 00:34:02.951687   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserving static IP address...
	I0816 00:34:02.952154   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserved static IP address: 192.168.72.137
	I0816 00:34:02.952186   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.952201   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting for SSH to be available...
	I0816 00:34:02.952224   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | skip adding static IP to network mk-old-k8s-version-098619 - found existing host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"}
	I0816 00:34:02.952236   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Getting to WaitForSSH function...
	I0816 00:34:02.954361   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954686   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.954715   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954791   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH client type: external
	I0816 00:34:02.954830   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa (-rw-------)
	I0816 00:34:02.954871   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:02.954890   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | About to run SSH command:
	I0816 00:34:02.954909   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | exit 0
	I0816 00:34:03.078035   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:03.078408   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:34:03.079002   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.081041   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081391   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.081489   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081566   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:34:03.081748   79191 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:03.081767   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.082007   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.084022   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084333   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.084357   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084499   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.084700   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.084867   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.085074   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.085266   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.085509   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.085525   79191 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:03.186066   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:03.186094   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186368   79191 buildroot.go:166] provisioning hostname "old-k8s-version-098619"
	I0816 00:34:03.186397   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186597   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.189330   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189658   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.189702   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189792   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.190004   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190185   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190344   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.190481   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.190665   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.190688   79191 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098619 && echo "old-k8s-version-098619" | sudo tee /etc/hostname
	I0816 00:34:03.304585   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098619
	
	I0816 00:34:03.304608   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.307415   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307732   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.307763   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307955   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.308155   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308314   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308474   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.308629   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.308795   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.308811   79191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098619/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:03.418968   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:03.419010   79191 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:03.419045   79191 buildroot.go:174] setting up certificates
	I0816 00:34:03.419058   79191 provision.go:84] configureAuth start
	I0816 00:34:03.419072   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.419338   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.421799   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422159   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.422198   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422401   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.425023   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425417   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.425445   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425557   79191 provision.go:143] copyHostCerts
	I0816 00:34:03.425624   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:03.425646   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:03.425717   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:03.425875   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:03.425888   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:03.425921   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:03.426007   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:03.426017   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:03.426045   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:03.426112   79191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098619 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-098619]
	I0816 00:34:03.509869   79191 provision.go:177] copyRemoteCerts
	I0816 00:34:03.509932   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:03.509961   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.512603   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.512938   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.512984   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.513163   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.513451   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.513617   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.513777   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:03.596330   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 00:34:03.621969   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:03.646778   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:03.671937   79191 provision.go:87] duration metric: took 252.867793ms to configureAuth
	I0816 00:34:03.671964   79191 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:03.672149   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:34:03.672250   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.675207   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675600   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.675625   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675787   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.676006   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676199   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676360   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.676549   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.676762   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.676779   79191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:03.945259   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:03.945287   79191 machine.go:96] duration metric: took 863.526642ms to provisionDockerMachine
	I0816 00:34:03.945298   79191 start.go:293] postStartSetup for "old-k8s-version-098619" (driver="kvm2")
	I0816 00:34:03.945308   79191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:03.945335   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.945638   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:03.945666   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.948590   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.948967   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.948989   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.949152   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.949350   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.949491   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.949645   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.028994   79191 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:04.033776   79191 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:04.033799   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:04.033872   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:04.033943   79191 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:04.034033   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:04.045492   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:04.071879   79191 start.go:296] duration metric: took 126.569157ms for postStartSetup
	I0816 00:34:04.071920   79191 fix.go:56] duration metric: took 19.817260263s for fixHost
	I0816 00:34:04.071944   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.074942   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.075325   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075504   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.075699   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075846   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075977   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.076146   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:04.076319   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:04.076332   79191 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:04.178483   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768444.133390375
	
	I0816 00:34:04.178510   79191 fix.go:216] guest clock: 1723768444.133390375
	I0816 00:34:04.178519   79191 fix.go:229] Guest: 2024-08-16 00:34:04.133390375 +0000 UTC Remote: 2024-08-16 00:34:04.071925107 +0000 UTC m=+252.320651106 (delta=61.465268ms)
	I0816 00:34:04.178537   79191 fix.go:200] guest clock delta is within tolerance: 61.465268ms
	I0816 00:34:04.178541   79191 start.go:83] releasing machines lock for "old-k8s-version-098619", held for 19.923923778s
	I0816 00:34:04.178567   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.178875   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:04.181999   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182458   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.182490   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183192   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183357   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183412   79191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:04.183461   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.183553   79191 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:04.183575   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.186192   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186418   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186507   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186531   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186679   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.186811   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186836   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186850   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187016   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187032   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.187211   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187215   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.187364   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187488   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.283880   79191 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:04.289798   79191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:04.436822   79191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:04.443547   79191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:04.443631   79191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:04.464783   79191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:04.464807   79191 start.go:495] detecting cgroup driver to use...
	I0816 00:34:04.464873   79191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:04.481504   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:04.501871   79191 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:04.501942   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:04.521898   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:04.538186   79191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:04.704361   79191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:04.881682   79191 docker.go:233] disabling docker service ...
	I0816 00:34:04.881757   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:04.900264   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:04.916152   79191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:05.048440   79191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:05.166183   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:05.181888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:05.202525   79191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 00:34:05.202592   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.214655   79191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:05.214712   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.226052   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.236878   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.249217   79191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:05.260362   79191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:05.271039   79191 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:05.271108   79191 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:05.290423   79191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:05.307175   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:05.465815   79191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:05.640787   79191 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:05.640878   79191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:05.646821   79191 start.go:563] Will wait 60s for crictl version
	I0816 00:34:05.646883   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:05.651455   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:05.698946   79191 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:05.699037   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.729185   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.772063   79191 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 00:34:05.773406   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:05.776689   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777177   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:05.777241   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777435   79191 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:05.782377   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:05.797691   79191 kubeadm.go:883] updating cluster {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:05.797872   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:34:05.797953   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:05.861468   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:05.861557   79191 ssh_runner.go:195] Run: which lz4
	I0816 00:34:05.866880   79191 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:34:05.872036   79191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:34:05.872071   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 00:34:04.202120   78489 main.go:141] libmachine: (no-preload-819398) Calling .Start
	I0816 00:34:04.202293   78489 main.go:141] libmachine: (no-preload-819398) Ensuring networks are active...
	I0816 00:34:04.203062   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network default is active
	I0816 00:34:04.203345   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network mk-no-preload-819398 is active
	I0816 00:34:04.205286   78489 main.go:141] libmachine: (no-preload-819398) Getting domain xml...
	I0816 00:34:04.206025   78489 main.go:141] libmachine: (no-preload-819398) Creating domain...
	I0816 00:34:05.553661   78489 main.go:141] libmachine: (no-preload-819398) Waiting to get IP...
	I0816 00:34:05.554629   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.555210   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.555309   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.555211   80407 retry.go:31] will retry after 298.759084ms: waiting for machine to come up
	I0816 00:34:05.856046   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.856571   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.856604   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.856530   80407 retry.go:31] will retry after 293.278331ms: waiting for machine to come up
	I0816 00:34:06.151110   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.151542   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.151571   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.151498   80407 retry.go:31] will retry after 332.472371ms: waiting for machine to come up
	I0816 00:34:06.485927   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.486487   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.486514   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.486459   80407 retry.go:31] will retry after 600.720276ms: waiting for machine to come up
	I0816 00:34:05.926954   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.929140   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:06.972334   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:07.469652   78747 node_ready.go:49] node "default-k8s-diff-port-616827" has status "Ready":"True"
	I0816 00:34:07.469684   78747 node_ready.go:38] duration metric: took 7.004536271s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:07.469700   78747 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:07.476054   78747 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482839   78747 pod_ready.go:93] pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.482861   78747 pod_ready.go:82] duration metric: took 6.779315ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482871   78747 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489325   78747 pod_ready.go:93] pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.489348   78747 pod_ready.go:82] duration metric: took 6.470629ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489357   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495536   78747 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.495555   78747 pod_ready.go:82] duration metric: took 6.192295ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495565   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:09.503258   78747 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.631328   79191 crio.go:462] duration metric: took 1.76448771s to copy over tarball
	I0816 00:34:07.631413   79191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:34:10.662435   79191 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.030990355s)
	I0816 00:34:10.662472   79191 crio.go:469] duration metric: took 3.031115615s to extract the tarball
	I0816 00:34:10.662482   79191 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:34:10.707627   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:10.745704   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:10.745742   79191 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.745838   79191 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.745914   79191 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.745860   79191 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.745943   79191 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.745884   79191 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.746059   79191 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747781   79191 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.747803   79191 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.747808   79191 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.747824   79191 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.747842   79191 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.747883   79191 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.747895   79191 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747948   79191 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.916488   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.923947   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.931668   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.942764   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.948555   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.957593   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.970039   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 00:34:11.012673   79191 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 00:34:11.012707   79191 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.012778   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.026267   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:11.135366   79191 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 00:34:11.135398   79191 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.135451   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.149180   79191 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 00:34:11.149226   79191 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.149271   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183480   79191 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 00:34:11.183526   79191 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.183526   79191 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 00:34:11.183578   79191 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.183584   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183637   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186513   79191 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 00:34:11.186559   79191 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.186622   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186632   79191 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 00:34:11.186658   79191 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 00:34:11.186699   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186722   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.252857   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.252914   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.252935   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.253007   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.253012   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.253083   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.253140   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420527   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.420559   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.420564   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.420638   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420732   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.420791   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.420813   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591141   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.591197   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.591267   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.591337   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.591418   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591453   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.591505   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 00:34:11.721234   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 00:34:11.725967   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 00:34:11.731189   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 00:34:11.731276   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 00:34:11.742195   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 00:34:11.742224   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 00:34:11.742265   79191 cache_images.go:92] duration metric: took 996.507737ms to LoadCachedImages
	W0816 00:34:11.742327   79191 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0816 00:34:11.742342   79191 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0816 00:34:11.742464   79191 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-098619 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:11.742546   79191 ssh_runner.go:195] Run: crio config
	I0816 00:34:07.089462   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.090073   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.090099   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.089985   80407 retry.go:31] will retry after 666.260439ms: waiting for machine to come up
	I0816 00:34:07.757621   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.758156   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.758182   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.758105   80407 retry.go:31] will retry after 782.571604ms: waiting for machine to come up
	I0816 00:34:08.542021   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:08.542426   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:08.542475   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:08.542381   80407 retry.go:31] will retry after 840.347921ms: waiting for machine to come up
	I0816 00:34:09.384399   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:09.384866   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:09.384893   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:09.384824   80407 retry.go:31] will retry after 1.376690861s: waiting for machine to come up
	I0816 00:34:10.763158   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:10.763547   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:10.763573   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:10.763484   80407 retry.go:31] will retry after 1.237664711s: waiting for machine to come up
	I0816 00:34:10.426656   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:12.429312   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.354758   78747 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.354783   78747 pod_ready.go:82] duration metric: took 3.859210458s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.354796   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363323   78747 pod_ready.go:93] pod "kube-proxy-f99ds" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.363347   78747 pod_ready.go:82] duration metric: took 8.543406ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363359   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369799   78747 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.369826   78747 pod_ready.go:82] duration metric: took 6.458192ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369858   78747 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:13.376479   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.791749   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:34:11.791779   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:11.791791   79191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:11.791810   79191 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098619 NodeName:old-k8s-version-098619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 00:34:11.791969   79191 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-098619"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:11.792046   79191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 00:34:11.802572   79191 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:11.802649   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:11.812583   79191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 00:34:11.831551   79191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:11.852476   79191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 00:34:11.875116   79191 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:11.879833   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:11.893308   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:12.038989   79191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:12.061736   79191 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619 for IP: 192.168.72.137
	I0816 00:34:12.061761   79191 certs.go:194] generating shared ca certs ...
	I0816 00:34:12.061780   79191 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.061992   79191 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:12.062046   79191 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:12.062059   79191 certs.go:256] generating profile certs ...
	I0816 00:34:12.062193   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key
	I0816 00:34:12.062283   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4
	I0816 00:34:12.062343   79191 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key
	I0816 00:34:12.062485   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:12.062523   79191 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:12.062536   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:12.062579   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:12.062614   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:12.062658   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:12.062721   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:12.063630   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:12.106539   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:12.139393   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:12.171548   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:12.213113   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 00:34:12.244334   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:34:12.287340   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:12.331047   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:34:12.369666   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:12.397260   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:12.424009   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:12.450212   79191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:12.471550   79191 ssh_runner.go:195] Run: openssl version
	I0816 00:34:12.479821   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:12.494855   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500546   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500620   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.508817   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:12.521689   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:12.533904   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538789   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538946   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.546762   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:12.561940   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:12.575852   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582377   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582457   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.590772   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:12.604976   79191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:12.610332   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:12.617070   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:12.625769   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:12.634342   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:12.641486   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:12.650090   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 00:34:12.658206   79191 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:12.658306   79191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:12.658392   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.703323   79191 cri.go:89] found id: ""
	I0816 00:34:12.703399   79191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:12.714950   79191 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:12.714970   79191 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:12.715047   79191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:12.727051   79191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:12.728059   79191 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:12.728655   79191 kubeconfig.go:62] /home/jenkins/minikube-integration/19452-12919/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-098619" cluster setting kubeconfig missing "old-k8s-version-098619" context setting]
	I0816 00:34:12.729552   79191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.731269   79191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:12.744732   79191 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0816 00:34:12.744766   79191 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:12.744777   79191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:12.744833   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.783356   79191 cri.go:89] found id: ""
	I0816 00:34:12.783432   79191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:12.801942   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:12.816412   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:12.816433   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:12.816480   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:12.827686   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:12.827757   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:12.838063   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:12.847714   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:12.847808   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:12.858274   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.869328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:12.869389   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.881457   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:12.892256   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:12.892325   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:12.902115   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:12.912484   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.040145   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.851639   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.085396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.208430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.321003   79191 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:14.321084   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:14.822130   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.321780   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.822121   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:16.322077   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:12.002977   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:12.003441   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:12.003470   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:12.003401   80407 retry.go:31] will retry after 1.413320186s: waiting for machine to come up
	I0816 00:34:13.418972   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:13.419346   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:13.419374   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:13.419284   80407 retry.go:31] will retry after 2.055525842s: waiting for machine to come up
	I0816 00:34:15.476550   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:15.477044   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:15.477072   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:15.477021   80407 retry.go:31] will retry after 2.728500649s: waiting for machine to come up
	I0816 00:34:14.926133   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.930322   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:15.377291   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:17.877627   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.821714   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.321166   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.821648   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.321711   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.821520   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.321732   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.821325   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.321783   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.821958   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:21.321139   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.208958   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:18.209350   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:18.209379   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:18.209302   80407 retry.go:31] will retry after 3.922749943s: waiting for machine to come up
	I0816 00:34:19.426265   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.926480   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.134804   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135230   78489 main.go:141] libmachine: (no-preload-819398) Found IP for machine: 192.168.61.15
	I0816 00:34:22.135266   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has current primary IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135292   78489 main.go:141] libmachine: (no-preload-819398) Reserving static IP address...
	I0816 00:34:22.135596   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.135629   78489 main.go:141] libmachine: (no-preload-819398) DBG | skip adding static IP to network mk-no-preload-819398 - found existing host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"}
	I0816 00:34:22.135644   78489 main.go:141] libmachine: (no-preload-819398) Reserved static IP address: 192.168.61.15
	I0816 00:34:22.135661   78489 main.go:141] libmachine: (no-preload-819398) Waiting for SSH to be available...
	I0816 00:34:22.135675   78489 main.go:141] libmachine: (no-preload-819398) DBG | Getting to WaitForSSH function...
	I0816 00:34:22.137639   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.137925   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.137956   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.138099   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH client type: external
	I0816 00:34:22.138141   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa (-rw-------)
	I0816 00:34:22.138198   78489 main.go:141] libmachine: (no-preload-819398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:22.138233   78489 main.go:141] libmachine: (no-preload-819398) DBG | About to run SSH command:
	I0816 00:34:22.138248   78489 main.go:141] libmachine: (no-preload-819398) DBG | exit 0
	I0816 00:34:22.262094   78489 main.go:141] libmachine: (no-preload-819398) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:22.262496   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetConfigRaw
	I0816 00:34:22.263081   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.265419   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.265746   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.265782   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.266097   78489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/config.json ...
	I0816 00:34:22.266283   78489 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:22.266301   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:22.266501   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.268848   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269269   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.269308   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269356   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.269537   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269684   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269803   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.269971   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.270185   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.270197   78489 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:22.374848   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:22.374880   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375169   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:34:22.375195   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375407   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.378309   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378649   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.378678   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378853   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.379060   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379203   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379362   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.379568   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.379735   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.379749   78489 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-819398 && echo "no-preload-819398" | sudo tee /etc/hostname
	I0816 00:34:22.496438   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-819398
	
	I0816 00:34:22.496467   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.499101   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499411   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.499443   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499703   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.499912   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500116   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500247   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.500419   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.500624   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.500650   78489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-819398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-819398/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-819398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:22.619769   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:22.619802   78489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:22.619826   78489 buildroot.go:174] setting up certificates
	I0816 00:34:22.619837   78489 provision.go:84] configureAuth start
	I0816 00:34:22.619847   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.620106   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.623130   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623485   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.623510   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623629   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.625964   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626308   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.626335   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626475   78489 provision.go:143] copyHostCerts
	I0816 00:34:22.626536   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:22.626557   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:22.626629   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:22.626756   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:22.626768   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:22.626798   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:22.626889   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:22.626899   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:22.626925   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:22.627008   78489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.no-preload-819398 san=[127.0.0.1 192.168.61.15 localhost minikube no-preload-819398]
	I0816 00:34:22.710036   78489 provision.go:177] copyRemoteCerts
	I0816 00:34:22.710093   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:22.710120   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.712944   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713380   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.713409   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713612   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.713780   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.713926   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.714082   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:22.800996   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:22.828264   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:34:22.855258   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:22.880981   78489 provision.go:87] duration metric: took 261.134406ms to configureAuth
	I0816 00:34:22.881013   78489 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:22.881176   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:22.881240   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.883962   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884348   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.884368   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884611   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.884828   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885052   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885248   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.885448   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.885639   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.885661   78489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:23.154764   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:23.154802   78489 machine.go:96] duration metric: took 888.504728ms to provisionDockerMachine
	I0816 00:34:23.154821   78489 start.go:293] postStartSetup for "no-preload-819398" (driver="kvm2")
	I0816 00:34:23.154837   78489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:23.154860   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.155176   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:23.155205   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.158105   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158482   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.158517   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158674   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.158864   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.159039   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.159198   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.241041   78489 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:23.245237   78489 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:23.245260   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:23.245324   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:23.245398   78489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:23.245480   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:23.254735   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:23.279620   78489 start.go:296] duration metric: took 124.783636ms for postStartSetup
	I0816 00:34:23.279668   78489 fix.go:56] duration metric: took 19.100951861s for fixHost
	I0816 00:34:23.279693   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.282497   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.282959   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.282981   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.283184   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.283376   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283514   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283687   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.283870   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:23.284027   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:23.284037   78489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:23.390632   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768463.360038650
	
	I0816 00:34:23.390658   78489 fix.go:216] guest clock: 1723768463.360038650
	I0816 00:34:23.390668   78489 fix.go:229] Guest: 2024-08-16 00:34:23.36003865 +0000 UTC Remote: 2024-08-16 00:34:23.27967333 +0000 UTC m=+356.445975156 (delta=80.36532ms)
	I0816 00:34:23.390697   78489 fix.go:200] guest clock delta is within tolerance: 80.36532ms
	I0816 00:34:23.390710   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 19.212026147s
	I0816 00:34:23.390729   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.390977   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:23.393728   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394050   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.394071   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394255   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394722   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394895   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394977   78489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:23.395028   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.395135   78489 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:23.395151   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.397773   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.397939   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398196   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398237   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398354   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398480   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398507   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398515   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398717   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.398722   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398887   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398884   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.399029   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.399164   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.497983   78489 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:23.503896   78489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:23.660357   78489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:23.666714   78489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:23.666775   78489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:23.684565   78489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:23.684586   78489 start.go:495] detecting cgroup driver to use...
	I0816 00:34:23.684655   78489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:23.701981   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:23.715786   78489 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:23.715852   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:23.733513   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:23.748705   78489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:23.866341   78489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:24.016845   78489 docker.go:233] disabling docker service ...
	I0816 00:34:24.016918   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:24.032673   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:24.046465   78489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:24.184862   78489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:24.309066   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:24.323818   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:24.344352   78489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:34:24.344422   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.355015   78489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:24.355093   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.365665   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.377238   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.388619   78489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:24.399306   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.410087   78489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.428465   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.439026   78489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:24.448856   78489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:24.448943   78489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:24.463002   78489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:24.473030   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:24.587542   78489 ssh_runner.go:195] Run: sudo systemctl restart crio
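	(Aside, not part of the log: the sed commands above rewrite keys such as pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf and then restart CRI-O. A minimal Go sketch of the same "rewrite a key in the drop-in, then restart" idea; the function name and in-place regexp rewrite are illustrative assumptions, not minikube's crio.go implementation.)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setCrioCgroupManager rewrites the cgroup_manager key in the CRI-O drop-in
	// so it matches the kubelet's cgroup driver (here "cgroupfs").
	func setCrioCgroupManager(path, driver string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", driver)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := setCrioCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		// CRI-O must be restarted afterwards (sudo systemctl restart crio), as the
		// log above does, before the new cgroup driver takes effect.
	}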
	I0816 00:34:24.719072   78489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:24.719159   78489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:24.723789   78489 start.go:563] Will wait 60s for crictl version
	I0816 00:34:24.723842   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:24.727616   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:24.766517   78489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:24.766600   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.795204   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.824529   78489 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:34:20.376278   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.376510   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:24.876314   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.822114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.321350   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.821541   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.322014   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.821938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.321883   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.821178   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.321881   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.821199   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:26.321573   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.825725   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:24.828458   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829018   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:24.829045   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829336   78489 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:24.833711   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:24.847017   78489 kubeadm.go:883] updating cluster {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:24.847136   78489 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:34:24.847171   78489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:24.883489   78489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:34:24.883515   78489 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:24.883592   78489 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.883612   78489 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.883664   78489 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:24.883690   78489 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.883719   78489 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.883595   78489 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.883927   78489 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.884016   78489 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885061   78489 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.885185   78489 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885207   78489 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.885204   78489 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.885225   78489 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.042311   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.042317   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.048181   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.050502   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.059137   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.091688   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 00:34:25.096653   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.126261   78489 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 00:34:25.126311   78489 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.126368   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.164673   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.189972   78489 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 00:34:25.190014   78489 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.190051   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249632   78489 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 00:34:25.249674   78489 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.249717   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249780   78489 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 00:34:25.249824   78489 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.249884   78489 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 00:34:25.249910   78489 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.249887   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249942   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360038   78489 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 00:34:25.360082   78489 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.360121   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360133   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.360191   78489 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 00:34:25.360208   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.360221   78489 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.360256   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360283   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.360326   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.360337   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.462610   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.462691   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.480037   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.480114   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.480176   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.480211   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.489343   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.642853   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.642913   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.642963   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.645719   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.645749   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.645833   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.645899   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.802574   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 00:34:25.802645   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.802687   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.802728   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.808235   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 00:34:25.808330   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 00:34:25.808387   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 00:34:25.808401   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 00:34:25.808432   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:25.808334   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:25.808471   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:25.808480   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:25.816510   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 00:34:25.816527   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.816560   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.885445   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 00:34:25.885532   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 00:34:25.885549   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:25.885588   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 00:34:25.885600   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:25.885674   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 00:34:25.885690   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 00:34:25.885711   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 00:34:24.426102   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.927534   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.877013   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:29.378108   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.821489   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.322094   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.321201   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.821854   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.321188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.821729   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.321316   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.821998   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:31.322184   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.938767   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.122182459s)
	I0816 00:34:27.938804   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 00:34:27.938801   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.05323098s)
	I0816 00:34:27.938826   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.05321158s)
	I0816 00:34:27.938831   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 00:34:27.938833   78489 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:27.938843   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 00:34:27.938906   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:31.645449   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.706515577s)
	I0816 00:34:31.645486   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 00:34:31.645514   78489 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:31.645563   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:29.427463   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.927253   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.875608   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:33.876822   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.821361   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.321205   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.822088   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.322126   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.821956   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.321921   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.821245   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.822034   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:36.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.625714   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.980118908s)
	I0816 00:34:33.625749   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 00:34:33.625773   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:33.625824   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:35.680134   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054281396s)
	I0816 00:34:35.680167   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 00:34:35.680209   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:35.680276   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:34.426416   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.427589   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:38.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:35.877327   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:37.877385   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.821567   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.321329   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.822169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.321832   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.821404   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.321406   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.821914   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.322169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.821149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:41.322125   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.430152   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.749849436s)
	I0816 00:34:37.430180   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 00:34:37.430208   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:37.430254   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:39.684335   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.254047221s)
	I0816 00:34:39.684365   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 00:34:39.684391   78489 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:39.684445   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:40.328672   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 00:34:40.328722   78489 cache_images.go:123] Successfully loaded all cached images
	I0816 00:34:40.328729   78489 cache_images.go:92] duration metric: took 15.445200533s to LoadCachedImages
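	(Aside, not part of the log: because no preload tarball exists for this profile, the images above are transferred from the local cache and loaded one at a time with `podman load`, which is why the whole LoadCachedImages step takes ~15.4s and the etcd archive alone ~3.7s. A minimal Go sketch of that loop; minikube actually runs these over SSH via ssh_runner, so the local exec.Command stand-in and the function name are assumptions for illustration only.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadCachedImages loads each cached image tarball into the CRI-O image
	// store with `podman load`, sequentially.
	func loadCachedImages(tarballs []string) error {
		for _, tb := range tarballs {
			cmd := exec.Command("sudo", "podman", "load", "-i", tb)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("podman load %s: %w", tb, err)
			}
		}
		return nil
	}

	func main() {
		err := loadCachedImages([]string{
			"/var/lib/minikube/images/kube-scheduler_v1.31.0",
			"/var/lib/minikube/images/etcd_3.5.15-0",
		})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}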
	I0816 00:34:40.328743   78489 kubeadm.go:934] updating node { 192.168.61.15 8443 v1.31.0 crio true true} ...
	I0816 00:34:40.328897   78489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-819398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:40.328994   78489 ssh_runner.go:195] Run: crio config
	I0816 00:34:40.383655   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:40.383675   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:40.383685   78489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:40.383712   78489 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.15 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-819398 NodeName:no-preload-819398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:34:40.383855   78489 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-819398"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:40.383930   78489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:34:40.395384   78489 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:40.395457   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:40.405037   78489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 00:34:40.423278   78489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:40.440963   78489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 00:34:40.458845   78489 ssh_runner.go:195] Run: grep 192.168.61.15	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:40.462574   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:40.475524   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:40.614624   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:40.632229   78489 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398 for IP: 192.168.61.15
	I0816 00:34:40.632252   78489 certs.go:194] generating shared ca certs ...
	I0816 00:34:40.632267   78489 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:40.632430   78489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:40.632483   78489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:40.632497   78489 certs.go:256] generating profile certs ...
	I0816 00:34:40.632598   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/client.key
	I0816 00:34:40.632679   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key.a9de72ef
	I0816 00:34:40.632759   78489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key
	I0816 00:34:40.632919   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:40.632962   78489 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:40.632978   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:40.633011   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:40.633042   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:40.633068   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:40.633124   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:40.633963   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:40.676094   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:40.707032   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:40.740455   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:40.778080   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 00:34:40.809950   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:34:40.841459   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:40.866708   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:34:40.893568   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:40.917144   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:40.942349   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:40.966731   78489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:40.984268   78489 ssh_runner.go:195] Run: openssl version
	I0816 00:34:40.990614   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:41.002909   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007595   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007645   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.013618   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:41.024886   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:41.036350   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040801   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040845   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.046554   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:41.057707   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:41.069566   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074107   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074159   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.080113   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:41.091854   78489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:41.096543   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:41.102883   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:41.109228   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:41.115622   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:41.121895   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:41.128016   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
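	(Aside, not part of the log: the `openssl x509 -checkend 86400` runs above verify that each control-plane certificate is still valid for at least another 24 hours before reusing it. A minimal Go sketch of the same check using crypto/x509; the helper name is an assumption, not minikube's certs.go.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certExpiresSoon reports whether a PEM certificate expires within `within`.
	func certExpiresSoon(path string, within time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(within).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := certExpiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}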
	I0816 00:34:41.134126   78489 kubeadm.go:392] StartCluster: {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:41.134230   78489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:41.134310   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.178898   78489 cri.go:89] found id: ""
	I0816 00:34:41.178972   78489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:41.190167   78489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:41.190184   78489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:41.190223   78489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:41.200385   78489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:41.201824   78489 kubeconfig.go:125] found "no-preload-819398" server: "https://192.168.61.15:8443"
	I0816 00:34:41.204812   78489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:41.225215   78489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.15
	I0816 00:34:41.225252   78489 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:41.225265   78489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:41.225323   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.269288   78489 cri.go:89] found id: ""
	I0816 00:34:41.269377   78489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:41.286238   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:41.297713   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:41.297732   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:41.297782   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:41.308635   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:41.308695   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:41.320045   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:41.329866   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:41.329952   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:41.341488   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.351018   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:41.351083   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.360845   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:41.370730   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:41.370808   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
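	(Aside, not part of the log: the grep/rm pairs above check whether admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf point at the expected control-plane endpoint, and delete any that do not (or do not exist) so kubeadm regenerates them. A minimal Go sketch of that cleanup; the function name and local file access are illustrative assumptions, not minikube's kubeadm.go.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes kubeconfigs that are missing or that do not
	// reference the expected control-plane endpoint.
	func cleanStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f) // missing or stale: regenerate via kubeadm init phases
				fmt.Printf("removed stale %s\n", f)
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}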
	I0816 00:34:41.382572   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:41.392544   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.515558   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.425671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:43.426507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:40.377638   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:42.877395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:41.821459   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.321938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.822038   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.321447   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.821571   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.321428   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.821496   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:46.322149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.610068   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.094473643s)
	I0816 00:34:42.610106   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.850562   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.916519   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:43.042025   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:43.042117   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.543065   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.043098   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.061154   78489 api_server.go:72] duration metric: took 1.019134992s to wait for apiserver process to appear ...
	I0816 00:34:44.061180   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:34:44.061199   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.718683   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.718717   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:46.718730   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.785528   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.785559   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:47.061692   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.066556   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.066590   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:47.562057   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.569664   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.569699   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:48.061258   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:48.065926   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:34:48.073136   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:34:48.073165   78489 api_server.go:131] duration metric: took 4.011977616s to wait for apiserver health ...
	I0816 00:34:48.073179   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:48.073189   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:48.075105   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
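The api_server.go lines above poll https://192.168.61.15:8443/healthz until it answers 200, treating the transient 403 (anonymous user not yet authorized) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) responses as "not ready yet". A rough sketch of that polling loop, assuming TLS verification can be skipped for the probe (minikube's real check differs):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz until it returns 200 or the
// timeout expires. Intermediate 403/500 responses, as seen in the log above,
// are reported but not treated as fatal.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert during bootstrap; skip
		// verification for this illustrative health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}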
	I0816 00:34:45.925817   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.925984   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:45.376424   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.377794   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.876764   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:46.822140   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.321575   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.321365   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.822009   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.321536   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.821189   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.321387   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.821982   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:51.322075   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.076340   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:34:48.113148   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:34:48.152316   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:34:48.166108   78489 system_pods.go:59] 8 kube-system pods found
	I0816 00:34:48.166142   78489 system_pods.go:61] "coredns-6f6b679f8f-sv454" [5ba1d55f-4455-4ad1-b3c8-7671ce481dd2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:34:48.166154   78489 system_pods.go:61] "etcd-no-preload-819398" [b5e55df3-fb20-4980-928f-31217bf25351] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:34:48.166164   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [7670f41c-8439-4782-a3c8-077a144d2998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:34:48.166175   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [61a6080a-5e65-4400-b230-0703f347fc17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:34:48.166182   78489 system_pods.go:61] "kube-proxy-xdm7w" [9d0517c5-8cf7-47a0-86d0-c674677e9f46] Running
	I0816 00:34:48.166191   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [af346e37-312a-4225-b3bf-0ddda71022dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:34:48.166204   78489 system_pods.go:61] "metrics-server-6867b74b74-mm5l7" [2ebc3f9f-e1a7-47b6-849e-6a4995d13206] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:34:48.166214   78489 system_pods.go:61] "storage-provisioner" [745bbfbd-aedb-4e68-946e-5a7ead1d5b48] Running
	I0816 00:34:48.166223   78489 system_pods.go:74] duration metric: took 13.883212ms to wait for pod list to return data ...
	I0816 00:34:48.166235   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:34:48.170444   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:34:48.170478   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:34:48.170492   78489 node_conditions.go:105] duration metric: took 4.251703ms to run NodePressure ...
	I0816 00:34:48.170520   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:48.437519   78489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:34:48.441992   78489 kubeadm.go:739] kubelet initialised
	I0816 00:34:48.442015   78489 kubeadm.go:740] duration metric: took 4.465986ms waiting for restarted kubelet to initialise ...
	I0816 00:34:48.442025   78489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:48.447127   78489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:50.453956   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.926184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.926515   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.876909   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.376236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.822066   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.321534   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.821154   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.321256   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.821510   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.321984   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.821175   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.321601   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:56.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.454122   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.954716   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.426224   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.926472   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.376394   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:58.876502   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.821891   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.321266   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.821346   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.321718   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.821304   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.821302   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.821563   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:01.321323   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.453951   78489 pod_ready.go:93] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:57.453974   78489 pod_ready.go:82] duration metric: took 9.00682228s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:57.453983   78489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.460582   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.961243   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:00.961269   78489 pod_ready.go:82] duration metric: took 3.507278873s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:00.961279   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468020   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:01.468047   78489 pod_ready.go:82] duration metric: took 506.758881ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468060   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.425956   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.925967   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.876678   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:03.376662   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.821317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.321560   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.821707   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.322110   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.821327   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.321430   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.821935   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.321559   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.821373   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.975498   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.975522   78489 pod_ready.go:82] duration metric: took 1.50745395s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.975531   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980290   78489 pod_ready.go:93] pod "kube-proxy-xdm7w" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.980316   78489 pod_ready.go:82] duration metric: took 4.778704ms for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980328   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988237   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.988260   78489 pod_ready.go:82] duration metric: took 7.924207ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988268   78489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:04.993992   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
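The pod_ready.go lines interleaved through this section are all the same readiness poll: fetch the pod, read its PodReady condition, and retry until it becomes True or the 4m0s budget expires. A compact client-go sketch of that check, using a pod name and namespace taken from the log (illustrative only):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, mirroring
// the "Ready":"True"/"False" status printed in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // same budget as pod_ready.go above
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-mm5l7", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}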
	I0816 00:35:04.426419   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.426648   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.927578   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:05.877102   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:07.877187   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.821405   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.321781   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.821420   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.321483   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.821347   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.321167   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.821188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.821179   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:11.322114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.994539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.995530   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.494248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.425605   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:13.426338   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:10.378729   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:12.875673   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:14.876717   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.822105   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.321963   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.822172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.321805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.821971   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:14.321784   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:14.321882   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:14.360939   79191 cri.go:89] found id: ""
	I0816 00:35:14.360962   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.360971   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:14.360976   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:14.361028   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:14.397796   79191 cri.go:89] found id: ""
	I0816 00:35:14.397824   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.397836   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:14.397858   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:14.397922   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:14.433924   79191 cri.go:89] found id: ""
	I0816 00:35:14.433950   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.433960   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:14.433968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:14.434024   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:14.468657   79191 cri.go:89] found id: ""
	I0816 00:35:14.468685   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.468696   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:14.468704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:14.468770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:14.505221   79191 cri.go:89] found id: ""
	I0816 00:35:14.505247   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.505256   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:14.505264   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:14.505323   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:14.546032   79191 cri.go:89] found id: ""
	I0816 00:35:14.546062   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.546072   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:14.546079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:14.546147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:14.581260   79191 cri.go:89] found id: ""
	I0816 00:35:14.581284   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.581292   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:14.581298   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:14.581352   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:14.616103   79191 cri.go:89] found id: ""
	I0816 00:35:14.616127   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.616134   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:14.616142   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:14.616153   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:14.690062   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:14.690106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:14.735662   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:14.735699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:14.786049   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:14.786086   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:14.800375   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:14.800405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:14.931822   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
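Process 79191 (the old-k8s-version cluster) never gets a kube-apiserver container, so each pass above falls back to logs.go's diagnostics: journalctl for kubelet and CRI-O, dmesg, "crictl ps -a", and a "kubectl describe nodes" that fails with connection refused because nothing is listening on localhost:8443. A small sketch that runs the same shell commands shown in the log (illustrative; paths and flags are copied from the entries above):

package main

import (
	"fmt"
	"os/exec"
)

// Each entry mirrors one "Gathering logs for ..." step from the log above.
var gathers = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
	{"CRI-O", "sudo journalctl -u crio -n 400"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, g := range gathers {
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			// "describe nodes" fails here exactly as in the log: the apiserver
			// never came up, so kubectl's connection to localhost:8443 is refused.
			fmt.Printf("gathering %s failed: %v\n", g.name, err)
		}
		fmt.Printf("==> %s <==\n%s\n", g.name, out)
	}
}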
	I0816 00:35:13.494676   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.497759   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.925671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.926279   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.375842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.376005   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.432686   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:17.448728   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:17.448806   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:17.496384   79191 cri.go:89] found id: ""
	I0816 00:35:17.496523   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.496568   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:17.496581   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:17.496646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:17.560779   79191 cri.go:89] found id: ""
	I0816 00:35:17.560810   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.560820   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:17.560829   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:17.560891   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:17.606007   79191 cri.go:89] found id: ""
	I0816 00:35:17.606036   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.606047   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:17.606054   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:17.606123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:17.639910   79191 cri.go:89] found id: ""
	I0816 00:35:17.639937   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.639945   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:17.639951   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:17.640030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:17.676534   79191 cri.go:89] found id: ""
	I0816 00:35:17.676563   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.676573   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:17.676581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:17.676645   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:17.716233   79191 cri.go:89] found id: ""
	I0816 00:35:17.716255   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.716262   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:17.716268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:17.716334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:17.753648   79191 cri.go:89] found id: ""
	I0816 00:35:17.753686   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.753696   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:17.753704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:17.753763   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:17.791670   79191 cri.go:89] found id: ""
	I0816 00:35:17.791694   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.791702   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:17.791711   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:17.791722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:17.840616   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:17.840650   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:17.854949   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:17.854981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:17.933699   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:17.933724   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:17.933750   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:18.010177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:18.010211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:20.551384   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:20.564463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:20.564540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:20.604361   79191 cri.go:89] found id: ""
	I0816 00:35:20.604389   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.604399   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:20.604405   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:20.604453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:20.639502   79191 cri.go:89] found id: ""
	I0816 00:35:20.639528   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.639535   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:20.639541   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:20.639590   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:20.676430   79191 cri.go:89] found id: ""
	I0816 00:35:20.676476   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.676484   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:20.676496   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:20.676551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:20.711213   79191 cri.go:89] found id: ""
	I0816 00:35:20.711243   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.711253   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:20.711261   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:20.711320   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:20.745533   79191 cri.go:89] found id: ""
	I0816 00:35:20.745563   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.745574   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:20.745581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:20.745644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:20.781031   79191 cri.go:89] found id: ""
	I0816 00:35:20.781056   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.781064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:20.781071   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:20.781119   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:20.819966   79191 cri.go:89] found id: ""
	I0816 00:35:20.819994   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.820005   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:20.820012   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:20.820096   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:20.859011   79191 cri.go:89] found id: ""
	I0816 00:35:20.859041   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.859052   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:20.859063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:20.859078   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:20.909479   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:20.909513   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:20.925627   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:20.925653   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:21.001707   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:21.001733   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:21.001747   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:21.085853   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:21.085893   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:17.994492   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:20.496255   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:22.426663   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:21.878587   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.377462   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:23.626499   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:23.640337   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:23.640395   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:23.679422   79191 cri.go:89] found id: ""
	I0816 00:35:23.679449   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.679457   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:23.679463   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:23.679522   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:23.716571   79191 cri.go:89] found id: ""
	I0816 00:35:23.716594   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.716601   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:23.716607   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:23.716660   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:23.752539   79191 cri.go:89] found id: ""
	I0816 00:35:23.752563   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.752573   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:23.752581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:23.752640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:23.790665   79191 cri.go:89] found id: ""
	I0816 00:35:23.790693   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.790700   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:23.790707   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:23.790757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:23.827695   79191 cri.go:89] found id: ""
	I0816 00:35:23.827719   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.827727   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:23.827733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:23.827792   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:23.867664   79191 cri.go:89] found id: ""
	I0816 00:35:23.867687   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.867695   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:23.867701   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:23.867776   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:23.907844   79191 cri.go:89] found id: ""
	I0816 00:35:23.907871   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.907882   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:23.907890   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:23.907951   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:23.945372   79191 cri.go:89] found id: ""
	I0816 00:35:23.945403   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.945414   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:23.945424   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:23.945438   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:23.998270   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:23.998302   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:24.012794   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:24.012824   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:24.087285   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:24.087308   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:24.087340   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:24.167151   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:24.167184   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:26.710285   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:26.724394   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:26.724453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:26.764667   79191 cri.go:89] found id: ""
	I0816 00:35:26.764690   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.764698   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:26.764704   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:26.764756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:22.994036   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.995035   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.927042   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:27.426054   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.877007   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.376563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.806631   79191 cri.go:89] found id: ""
	I0816 00:35:26.806660   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.806670   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:26.806677   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:26.806741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:26.843434   79191 cri.go:89] found id: ""
	I0816 00:35:26.843473   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.843485   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:26.843493   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:26.843576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:26.882521   79191 cri.go:89] found id: ""
	I0816 00:35:26.882556   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.882566   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:26.882574   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:26.882635   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:26.917956   79191 cri.go:89] found id: ""
	I0816 00:35:26.917985   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.917995   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:26.918004   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:26.918056   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:26.953168   79191 cri.go:89] found id: ""
	I0816 00:35:26.953191   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.953199   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:26.953205   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:26.953251   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:26.991366   79191 cri.go:89] found id: ""
	I0816 00:35:26.991397   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.991408   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:26.991416   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:26.991479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:27.028591   79191 cri.go:89] found id: ""
	I0816 00:35:27.028619   79191 logs.go:276] 0 containers: []
	W0816 00:35:27.028626   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:27.028635   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:27.028647   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:27.111613   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:27.111645   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:27.153539   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:27.153575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:27.209377   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:27.209420   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:27.223316   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:27.223343   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:27.301411   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:29.801803   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:29.815545   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:29.815626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:29.853638   79191 cri.go:89] found id: ""
	I0816 00:35:29.853668   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.853678   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:29.853687   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:29.853756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:29.892532   79191 cri.go:89] found id: ""
	I0816 00:35:29.892554   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.892561   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:29.892567   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:29.892622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:29.932486   79191 cri.go:89] found id: ""
	I0816 00:35:29.932511   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.932519   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:29.932524   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:29.932580   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:29.973161   79191 cri.go:89] found id: ""
	I0816 00:35:29.973194   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.973205   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:29.973213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:29.973275   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:30.009606   79191 cri.go:89] found id: ""
	I0816 00:35:30.009629   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.009637   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:30.009643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:30.009691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:30.045016   79191 cri.go:89] found id: ""
	I0816 00:35:30.045043   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.045050   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:30.045057   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:30.045113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:30.079934   79191 cri.go:89] found id: ""
	I0816 00:35:30.079959   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.079968   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:30.079974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:30.080030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:30.114173   79191 cri.go:89] found id: ""
	I0816 00:35:30.114199   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.114207   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:30.114216   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:30.114227   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:30.154765   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:30.154791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:30.204410   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:30.204442   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:30.218909   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:30.218934   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:30.294141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:30.294161   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:30.294193   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:26.995394   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.494569   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.426234   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.926349   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.926433   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.376976   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.377869   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:32.872216   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:32.886211   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:32.886289   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:32.929416   79191 cri.go:89] found id: ""
	I0816 00:35:32.929440   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.929449   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:32.929456   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:32.929520   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:32.977862   79191 cri.go:89] found id: ""
	I0816 00:35:32.977887   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.977896   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:32.977920   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:32.977978   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:33.015569   79191 cri.go:89] found id: ""
	I0816 00:35:33.015593   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.015603   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:33.015622   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:33.015681   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:33.050900   79191 cri.go:89] found id: ""
	I0816 00:35:33.050934   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.050943   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:33.050959   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:33.051033   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:33.084529   79191 cri.go:89] found id: ""
	I0816 00:35:33.084556   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.084564   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:33.084569   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:33.084619   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:33.119819   79191 cri.go:89] found id: ""
	I0816 00:35:33.119845   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.119855   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:33.119863   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:33.119928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:33.159922   79191 cri.go:89] found id: ""
	I0816 00:35:33.159952   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.159959   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:33.159965   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:33.160023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:33.194977   79191 cri.go:89] found id: ""
	I0816 00:35:33.195006   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.195018   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:33.195030   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:33.195044   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:33.208578   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:33.208623   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:33.282177   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:33.282198   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:33.282211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:33.365514   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:33.365552   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:33.405190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:33.405226   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:35.959033   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:35.971866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:35.971934   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:36.008442   79191 cri.go:89] found id: ""
	I0816 00:35:36.008473   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.008483   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:36.008489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:36.008547   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:36.044346   79191 cri.go:89] found id: ""
	I0816 00:35:36.044374   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.044386   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:36.044393   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:36.044444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:36.083078   79191 cri.go:89] found id: ""
	I0816 00:35:36.083104   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.083112   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:36.083118   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:36.083166   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:36.120195   79191 cri.go:89] found id: ""
	I0816 00:35:36.120218   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.120226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:36.120232   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:36.120288   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:36.156186   79191 cri.go:89] found id: ""
	I0816 00:35:36.156215   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.156225   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:36.156233   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:36.156295   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:36.195585   79191 cri.go:89] found id: ""
	I0816 00:35:36.195613   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.195623   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:36.195631   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:36.195699   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:36.231110   79191 cri.go:89] found id: ""
	I0816 00:35:36.231133   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.231141   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:36.231147   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:36.231210   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:36.268745   79191 cri.go:89] found id: ""
	I0816 00:35:36.268770   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.268778   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:36.268786   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:36.268800   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:36.282225   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:36.282251   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:36.351401   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:36.351431   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:36.351447   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:36.429970   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:36.430003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:36.473745   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:36.473776   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:31.994163   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.994256   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.995188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:36.427247   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.926123   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.877303   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:39.027444   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:39.041107   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:39.041170   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:39.079807   79191 cri.go:89] found id: ""
	I0816 00:35:39.079830   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.079837   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:39.079843   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:39.079890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:39.115532   79191 cri.go:89] found id: ""
	I0816 00:35:39.115559   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.115569   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:39.115576   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:39.115623   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:39.150197   79191 cri.go:89] found id: ""
	I0816 00:35:39.150222   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.150233   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:39.150241   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:39.150300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:39.186480   79191 cri.go:89] found id: ""
	I0816 00:35:39.186507   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.186515   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:39.186521   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:39.186572   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:39.221576   79191 cri.go:89] found id: ""
	I0816 00:35:39.221605   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.221615   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:39.221620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:39.221669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:39.259846   79191 cri.go:89] found id: ""
	I0816 00:35:39.259877   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.259888   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:39.259896   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:39.259950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:39.294866   79191 cri.go:89] found id: ""
	I0816 00:35:39.294891   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.294898   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:39.294903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:39.294952   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:39.329546   79191 cri.go:89] found id: ""
	I0816 00:35:39.329576   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.329584   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:39.329593   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:39.329604   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:39.371579   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:39.371609   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:39.422903   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:39.422935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:39.437673   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:39.437699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:39.515146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:39.515171   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:39.515185   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:38.495377   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.495856   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.926444   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:43.426438   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.376648   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.877521   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.101733   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:42.115563   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:42.115640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:42.155187   79191 cri.go:89] found id: ""
	I0816 00:35:42.155216   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.155224   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:42.155230   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:42.155282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:42.194414   79191 cri.go:89] found id: ""
	I0816 00:35:42.194444   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.194456   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:42.194464   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:42.194523   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:42.234219   79191 cri.go:89] found id: ""
	I0816 00:35:42.234245   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.234253   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:42.234259   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:42.234314   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:42.272278   79191 cri.go:89] found id: ""
	I0816 00:35:42.272304   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.272314   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:42.272322   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:42.272381   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:42.309973   79191 cri.go:89] found id: ""
	I0816 00:35:42.309999   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.310007   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:42.310013   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:42.310066   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:42.350745   79191 cri.go:89] found id: ""
	I0816 00:35:42.350773   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.350782   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:42.350790   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:42.350853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:42.387775   79191 cri.go:89] found id: ""
	I0816 00:35:42.387803   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.387813   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:42.387832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:42.387902   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:42.425086   79191 cri.go:89] found id: ""
	I0816 00:35:42.425110   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.425118   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:42.425125   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:42.425138   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:42.515543   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:42.515575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:42.558348   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:42.558372   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:42.613026   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:42.613059   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.628907   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:42.628932   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:42.710265   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.211083   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:45.225001   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:45.225083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:45.258193   79191 cri.go:89] found id: ""
	I0816 00:35:45.258223   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.258232   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:45.258240   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:45.258297   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:45.294255   79191 cri.go:89] found id: ""
	I0816 00:35:45.294278   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.294286   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:45.294291   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:45.294335   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:45.329827   79191 cri.go:89] found id: ""
	I0816 00:35:45.329875   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.329886   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:45.329894   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:45.329944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:45.366095   79191 cri.go:89] found id: ""
	I0816 00:35:45.366124   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.366134   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:45.366141   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:45.366202   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:45.402367   79191 cri.go:89] found id: ""
	I0816 00:35:45.402390   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.402398   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:45.402403   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:45.402449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:45.439272   79191 cri.go:89] found id: ""
	I0816 00:35:45.439293   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.439300   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:45.439310   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:45.439358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:45.474351   79191 cri.go:89] found id: ""
	I0816 00:35:45.474380   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.474388   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:45.474393   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:45.474445   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:45.519636   79191 cri.go:89] found id: ""
	I0816 00:35:45.519661   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.519671   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:45.519680   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:45.519695   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:45.593425   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.593446   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:45.593458   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:45.668058   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:45.668095   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:45.716090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:45.716125   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:45.774177   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:45.774207   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.495914   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:44.996641   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.426740   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.925719   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.376025   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.376628   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.876035   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:48.288893   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:48.302256   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:48.302321   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:48.337001   79191 cri.go:89] found id: ""
	I0816 00:35:48.337030   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.337041   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:48.337048   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:48.337110   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:48.378341   79191 cri.go:89] found id: ""
	I0816 00:35:48.378367   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.378375   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:48.378384   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:48.378447   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:48.414304   79191 cri.go:89] found id: ""
	I0816 00:35:48.414383   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.414402   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:48.414410   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:48.414473   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:48.453946   79191 cri.go:89] found id: ""
	I0816 00:35:48.453969   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.453976   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:48.453982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:48.454036   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:48.489597   79191 cri.go:89] found id: ""
	I0816 00:35:48.489617   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.489623   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:48.489629   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:48.489672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:48.524195   79191 cri.go:89] found id: ""
	I0816 00:35:48.524222   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.524232   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:48.524239   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:48.524293   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:48.567854   79191 cri.go:89] found id: ""
	I0816 00:35:48.567880   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.567890   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:48.567897   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:48.567956   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:48.603494   79191 cri.go:89] found id: ""
	I0816 00:35:48.603520   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.603530   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:48.603540   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:48.603556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:48.642927   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:48.642960   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:48.693761   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:48.693791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:48.708790   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:48.708818   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:48.780072   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:48.780092   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:48.780106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.362108   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:51.376113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:51.376185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:51.413988   79191 cri.go:89] found id: ""
	I0816 00:35:51.414022   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.414033   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:51.414041   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:51.414101   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:51.460901   79191 cri.go:89] found id: ""
	I0816 00:35:51.460937   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.460948   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:51.460956   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:51.461019   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:51.497178   79191 cri.go:89] found id: ""
	I0816 00:35:51.497205   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.497215   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:51.497223   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:51.497365   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:51.534559   79191 cri.go:89] found id: ""
	I0816 00:35:51.534589   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.534600   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:51.534607   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:51.534668   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:51.570258   79191 cri.go:89] found id: ""
	I0816 00:35:51.570280   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.570287   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:51.570293   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:51.570356   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:51.609639   79191 cri.go:89] found id: ""
	I0816 00:35:51.609665   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.609675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:51.609683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:51.609742   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:51.645629   79191 cri.go:89] found id: ""
	I0816 00:35:51.645652   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.645659   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:51.645664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:51.645731   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:51.683325   79191 cri.go:89] found id: ""
	I0816 00:35:51.683344   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.683351   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:51.683358   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:51.683369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:51.739101   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:51.739133   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:51.753436   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:51.753466   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:35:47.494904   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.495416   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.926975   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:51.928318   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:52.376854   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.880623   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:35:51.831242   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:51.831268   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:51.831294   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.926924   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:51.926970   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:54.472667   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:54.486706   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:54.486785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:54.524180   79191 cri.go:89] found id: ""
	I0816 00:35:54.524203   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.524211   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:54.524216   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:54.524273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:54.563758   79191 cri.go:89] found id: ""
	I0816 00:35:54.563781   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.563788   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:54.563795   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:54.563859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:54.599442   79191 cri.go:89] found id: ""
	I0816 00:35:54.599471   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.599481   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:54.599488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:54.599553   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:54.633521   79191 cri.go:89] found id: ""
	I0816 00:35:54.633547   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.633558   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:54.633565   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:54.633628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:54.670036   79191 cri.go:89] found id: ""
	I0816 00:35:54.670064   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.670075   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:54.670083   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:54.670148   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:54.707565   79191 cri.go:89] found id: ""
	I0816 00:35:54.707587   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.707594   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:54.707600   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:54.707659   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:54.744500   79191 cri.go:89] found id: ""
	I0816 00:35:54.744530   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.744541   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:54.744548   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:54.744612   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:54.778964   79191 cri.go:89] found id: ""
	I0816 00:35:54.778988   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.778995   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:54.779007   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:54.779020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:54.831806   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:54.831838   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:54.845954   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:54.845979   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:54.921817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:54.921855   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:54.921871   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:55.006401   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:55.006439   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:51.996591   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.495673   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.427044   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:56.927184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.376333   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.548661   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:57.562489   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:57.562549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:57.597855   79191 cri.go:89] found id: ""
	I0816 00:35:57.597881   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.597891   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:57.597899   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:57.597961   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:57.634085   79191 cri.go:89] found id: ""
	I0816 00:35:57.634114   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.634126   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:57.634133   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:57.634193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:57.671748   79191 cri.go:89] found id: ""
	I0816 00:35:57.671779   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.671788   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:57.671795   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:57.671859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:57.708836   79191 cri.go:89] found id: ""
	I0816 00:35:57.708862   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.708870   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:57.708877   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:57.708940   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:57.744601   79191 cri.go:89] found id: ""
	I0816 00:35:57.744630   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.744639   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:57.744645   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:57.744706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:57.781888   79191 cri.go:89] found id: ""
	I0816 00:35:57.781919   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.781929   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:57.781937   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:57.781997   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:57.822612   79191 cri.go:89] found id: ""
	I0816 00:35:57.822634   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.822641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:57.822647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:57.822706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:57.873968   79191 cri.go:89] found id: ""
	I0816 00:35:57.873998   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.874008   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:57.874019   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:57.874037   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:57.896611   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:57.896643   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:57.995575   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:57.995597   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:57.995612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:58.077196   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:58.077230   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:58.116956   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:58.116985   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:00.664805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:00.678425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:00.678501   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:00.715522   79191 cri.go:89] found id: ""
	I0816 00:36:00.715548   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.715557   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:00.715562   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:00.715608   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:00.749892   79191 cri.go:89] found id: ""
	I0816 00:36:00.749920   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.749931   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:00.749938   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:00.750006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:00.787302   79191 cri.go:89] found id: ""
	I0816 00:36:00.787325   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.787332   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:00.787338   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:00.787392   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:00.821866   79191 cri.go:89] found id: ""
	I0816 00:36:00.821894   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.821906   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:00.821914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:00.821971   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:00.856346   79191 cri.go:89] found id: ""
	I0816 00:36:00.856369   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.856377   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:00.856382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:00.856431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:00.893569   79191 cri.go:89] found id: ""
	I0816 00:36:00.893596   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.893606   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:00.893614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:00.893677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:00.930342   79191 cri.go:89] found id: ""
	I0816 00:36:00.930367   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.930378   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:00.930386   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:00.930622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:00.966039   79191 cri.go:89] found id: ""
	I0816 00:36:00.966071   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.966085   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:00.966095   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:00.966109   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:01.045594   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:01.045631   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:01.089555   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:01.089586   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:01.141597   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:01.141633   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:01.156260   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:01.156286   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:01.230573   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:56.995077   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:58.995897   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.495116   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.426099   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.926011   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.927327   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.376842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.875993   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.730825   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:03.744766   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:03.744838   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:03.781095   79191 cri.go:89] found id: ""
	I0816 00:36:03.781124   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.781142   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:03.781150   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:03.781215   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:03.815637   79191 cri.go:89] found id: ""
	I0816 00:36:03.815669   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.815680   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:03.815687   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:03.815741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:03.850076   79191 cri.go:89] found id: ""
	I0816 00:36:03.850110   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.850122   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:03.850130   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:03.850185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:03.888840   79191 cri.go:89] found id: ""
	I0816 00:36:03.888863   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.888872   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:03.888879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:03.888941   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:03.928317   79191 cri.go:89] found id: ""
	I0816 00:36:03.928341   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.928350   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:03.928359   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:03.928413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:03.964709   79191 cri.go:89] found id: ""
	I0816 00:36:03.964741   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.964751   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:03.964760   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:03.964830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:03.999877   79191 cri.go:89] found id: ""
	I0816 00:36:03.999902   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.999912   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:03.999919   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:03.999981   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:04.036772   79191 cri.go:89] found id: ""
	I0816 00:36:04.036799   79191 logs.go:276] 0 containers: []
	W0816 00:36:04.036810   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:04.036820   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:04.036833   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:04.118843   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:04.118879   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:04.162491   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:04.162548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:04.215100   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:04.215134   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:04.229043   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:04.229069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:04.307480   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:03.495661   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.995711   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.426223   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:08.426470   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.876718   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:07.877431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.807640   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:06.821144   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:06.821203   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:06.857743   79191 cri.go:89] found id: ""
	I0816 00:36:06.857776   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.857786   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:06.857794   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:06.857872   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:06.895980   79191 cri.go:89] found id: ""
	I0816 00:36:06.896007   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.896018   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:06.896025   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:06.896090   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:06.935358   79191 cri.go:89] found id: ""
	I0816 00:36:06.935389   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.935399   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:06.935406   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:06.935461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:06.971533   79191 cri.go:89] found id: ""
	I0816 00:36:06.971561   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.971572   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:06.971580   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:06.971640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:07.007786   79191 cri.go:89] found id: ""
	I0816 00:36:07.007812   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.007823   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:07.007830   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:07.007890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:07.044060   79191 cri.go:89] found id: ""
	I0816 00:36:07.044092   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.044104   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:07.044112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:07.044185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:07.080058   79191 cri.go:89] found id: ""
	I0816 00:36:07.080085   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.080094   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:07.080101   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:07.080156   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:07.117749   79191 cri.go:89] found id: ""
	I0816 00:36:07.117773   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.117780   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:07.117787   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:07.117799   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:07.171418   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:07.171453   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:07.185520   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:07.185542   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:07.257817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:07.257872   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:07.257888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:07.339530   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:07.339576   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:09.882613   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:09.895873   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:09.895950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:09.936739   79191 cri.go:89] found id: ""
	I0816 00:36:09.936766   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.936774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:09.936780   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:09.936836   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:09.974145   79191 cri.go:89] found id: ""
	I0816 00:36:09.974168   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.974180   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:09.974186   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:09.974243   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:10.012166   79191 cri.go:89] found id: ""
	I0816 00:36:10.012196   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.012206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:10.012214   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:10.012265   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:10.051080   79191 cri.go:89] found id: ""
	I0816 00:36:10.051103   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.051111   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:10.051117   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:10.051176   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:10.088519   79191 cri.go:89] found id: ""
	I0816 00:36:10.088548   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.088559   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:10.088567   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:10.088628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:10.123718   79191 cri.go:89] found id: ""
	I0816 00:36:10.123744   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.123752   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:10.123758   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:10.123805   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:10.161900   79191 cri.go:89] found id: ""
	I0816 00:36:10.161922   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.161929   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:10.161995   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:10.162064   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:10.196380   79191 cri.go:89] found id: ""
	I0816 00:36:10.196408   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.196419   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:10.196429   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:10.196443   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:10.248276   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:10.248309   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:10.262241   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:10.262269   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:10.340562   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:10.340598   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:10.340626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:10.417547   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:10.417578   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:07.996930   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:09.997666   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.426502   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.426976   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.377172   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.877236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.962310   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:12.976278   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:12.976338   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:13.014501   79191 cri.go:89] found id: ""
	I0816 00:36:13.014523   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.014530   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:13.014536   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:13.014587   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:13.055942   79191 cri.go:89] found id: ""
	I0816 00:36:13.055970   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.055979   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:13.055987   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:13.056048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:13.090309   79191 cri.go:89] found id: ""
	I0816 00:36:13.090336   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.090346   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:13.090354   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:13.090413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:13.124839   79191 cri.go:89] found id: ""
	I0816 00:36:13.124865   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.124876   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:13.124884   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:13.124945   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:13.164535   79191 cri.go:89] found id: ""
	I0816 00:36:13.164560   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.164567   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:13.164573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:13.164630   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:13.198651   79191 cri.go:89] found id: ""
	I0816 00:36:13.198699   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.198710   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:13.198718   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:13.198785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:13.233255   79191 cri.go:89] found id: ""
	I0816 00:36:13.233278   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.233286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:13.233292   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:13.233348   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:13.267327   79191 cri.go:89] found id: ""
	I0816 00:36:13.267351   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.267359   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:13.267367   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:13.267384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:13.352053   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:13.352089   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:13.393438   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:13.393471   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:13.445397   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:13.445430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:13.459143   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:13.459177   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:13.530160   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.031296   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:16.045557   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:16.045618   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:16.081828   79191 cri.go:89] found id: ""
	I0816 00:36:16.081871   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.081882   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:16.081890   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:16.081949   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:16.116228   79191 cri.go:89] found id: ""
	I0816 00:36:16.116254   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.116264   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:16.116272   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:16.116334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:16.150051   79191 cri.go:89] found id: ""
	I0816 00:36:16.150079   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.150087   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:16.150093   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:16.150139   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:16.186218   79191 cri.go:89] found id: ""
	I0816 00:36:16.186241   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.186248   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:16.186254   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:16.186301   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:16.223223   79191 cri.go:89] found id: ""
	I0816 00:36:16.223255   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.223263   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:16.223270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:16.223316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:16.259929   79191 cri.go:89] found id: ""
	I0816 00:36:16.259953   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.259960   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:16.259970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:16.260099   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:16.294611   79191 cri.go:89] found id: ""
	I0816 00:36:16.294633   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.294641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:16.294649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:16.294725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:16.333492   79191 cri.go:89] found id: ""
	I0816 00:36:16.333523   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.333533   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:16.333544   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:16.333563   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:16.385970   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:16.386002   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:16.400359   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:16.400384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:16.471363   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.471388   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:16.471408   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:16.555990   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:16.556022   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:12.495406   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.995145   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.926160   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.426768   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:15.376672   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.876395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.876542   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.099502   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:19.112649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:19.112706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:19.145809   79191 cri.go:89] found id: ""
	I0816 00:36:19.145837   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.145858   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:19.145865   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:19.145928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:19.183737   79191 cri.go:89] found id: ""
	I0816 00:36:19.183763   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.183774   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:19.183781   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:19.183841   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:19.219729   79191 cri.go:89] found id: ""
	I0816 00:36:19.219756   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.219764   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:19.219770   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:19.219815   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:19.254450   79191 cri.go:89] found id: ""
	I0816 00:36:19.254474   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.254481   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:19.254488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:19.254540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:19.289543   79191 cri.go:89] found id: ""
	I0816 00:36:19.289573   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.289585   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:19.289592   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:19.289651   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:19.330727   79191 cri.go:89] found id: ""
	I0816 00:36:19.330748   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.330756   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:19.330762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:19.330809   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:19.368952   79191 cri.go:89] found id: ""
	I0816 00:36:19.368978   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.368986   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:19.368992   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:19.369048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:19.406211   79191 cri.go:89] found id: ""
	I0816 00:36:19.406247   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.406258   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:19.406268   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:19.406282   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:19.457996   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:19.458032   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:19.472247   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:19.472274   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:19.542840   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:19.542862   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:19.542876   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:19.624478   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:19.624520   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:16.997148   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.496434   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.427251   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:21.925550   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.925858   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.376318   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:24.376431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.165884   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:22.180005   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:22.180078   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:22.217434   79191 cri.go:89] found id: ""
	I0816 00:36:22.217463   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.217471   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:22.217478   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:22.217534   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:22.250679   79191 cri.go:89] found id: ""
	I0816 00:36:22.250708   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.250717   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:22.250725   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:22.250785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:22.284294   79191 cri.go:89] found id: ""
	I0816 00:36:22.284324   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.284334   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:22.284341   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:22.284403   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:22.320747   79191 cri.go:89] found id: ""
	I0816 00:36:22.320779   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.320790   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:22.320799   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:22.320858   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:22.355763   79191 cri.go:89] found id: ""
	I0816 00:36:22.355793   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.355803   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:22.355811   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:22.355871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:22.392762   79191 cri.go:89] found id: ""
	I0816 00:36:22.392788   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.392796   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:22.392802   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:22.392860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:22.426577   79191 cri.go:89] found id: ""
	I0816 00:36:22.426605   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.426614   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:22.426621   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:22.426682   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:22.459989   79191 cri.go:89] found id: ""
	I0816 00:36:22.460018   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.460030   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:22.460040   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:22.460054   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:22.545782   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:22.545820   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:22.587404   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:22.587431   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:22.638519   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:22.638559   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:22.653064   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:22.653087   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:22.734333   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.234823   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:25.248716   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:25.248787   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:25.284760   79191 cri.go:89] found id: ""
	I0816 00:36:25.284786   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.284793   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:25.284799   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:25.284870   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:25.325523   79191 cri.go:89] found id: ""
	I0816 00:36:25.325548   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.325556   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:25.325562   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:25.325621   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:25.365050   79191 cri.go:89] found id: ""
	I0816 00:36:25.365078   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.365088   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:25.365096   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:25.365155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:25.405005   79191 cri.go:89] found id: ""
	I0816 00:36:25.405038   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.405049   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:25.405062   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:25.405121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:25.444622   79191 cri.go:89] found id: ""
	I0816 00:36:25.444648   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.444656   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:25.444662   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:25.444710   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:25.485364   79191 cri.go:89] found id: ""
	I0816 00:36:25.485394   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.485404   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:25.485413   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:25.485492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:25.521444   79191 cri.go:89] found id: ""
	I0816 00:36:25.521471   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.521482   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:25.521490   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:25.521550   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:25.556763   79191 cri.go:89] found id: ""
	I0816 00:36:25.556789   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.556796   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:25.556805   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:25.556817   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:25.606725   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:25.606759   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:25.623080   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:25.623108   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:25.705238   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.705258   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:25.705280   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:25.782188   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:25.782224   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:21.994519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.995061   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.494442   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:25.926835   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.427012   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.876206   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.876563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.325018   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:28.337778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:28.337860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:28.378452   79191 cri.go:89] found id: ""
	I0816 00:36:28.378482   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.378492   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:28.378499   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:28.378556   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:28.412103   79191 cri.go:89] found id: ""
	I0816 00:36:28.412132   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.412143   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:28.412150   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:28.412214   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:28.447363   79191 cri.go:89] found id: ""
	I0816 00:36:28.447388   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.447396   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:28.447401   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:28.447452   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:28.481199   79191 cri.go:89] found id: ""
	I0816 00:36:28.481228   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.481242   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:28.481251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:28.481305   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:28.517523   79191 cri.go:89] found id: ""
	I0816 00:36:28.517545   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.517552   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:28.517558   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:28.517620   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:28.552069   79191 cri.go:89] found id: ""
	I0816 00:36:28.552101   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.552112   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:28.552120   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:28.552193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:28.594124   79191 cri.go:89] found id: ""
	I0816 00:36:28.594148   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.594158   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:28.594166   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:28.594228   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:28.631451   79191 cri.go:89] found id: ""
	I0816 00:36:28.631472   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.631480   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:28.631488   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:28.631498   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:28.685335   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:28.685368   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:28.700852   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:28.700877   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:28.773932   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:28.773957   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:28.773972   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:28.848951   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:28.848989   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:31.389208   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:31.403731   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:31.403813   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:31.440979   79191 cri.go:89] found id: ""
	I0816 00:36:31.441010   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.441020   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:31.441028   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:31.441092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:31.476435   79191 cri.go:89] found id: ""
	I0816 00:36:31.476458   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.476465   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:31.476471   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:31.476530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:31.514622   79191 cri.go:89] found id: ""
	I0816 00:36:31.514644   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.514651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:31.514657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:31.514715   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:31.554503   79191 cri.go:89] found id: ""
	I0816 00:36:31.554533   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.554543   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:31.554551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:31.554609   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:31.590283   79191 cri.go:89] found id: ""
	I0816 00:36:31.590317   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.590325   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:31.590332   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:31.590380   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:31.625969   79191 cri.go:89] found id: ""
	I0816 00:36:31.626003   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.626014   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:31.626031   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:31.626102   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:31.660489   79191 cri.go:89] found id: ""
	I0816 00:36:31.660513   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.660520   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:31.660526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:31.660583   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:31.694728   79191 cri.go:89] found id: ""
	I0816 00:36:31.694761   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.694769   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:31.694779   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:31.694790   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:31.760631   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:31.760663   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:31.774858   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:31.774886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:36:28.994228   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.994276   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.926313   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.426045   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.877175   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.378602   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:36:31.851125   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:31.851145   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:31.851156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:31.934491   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:31.934521   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:34.476368   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:34.489252   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:34.489308   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:34.524932   79191 cri.go:89] found id: ""
	I0816 00:36:34.524964   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.524972   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:34.524977   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:34.525032   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:34.559434   79191 cri.go:89] found id: ""
	I0816 00:36:34.559462   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.559473   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:34.559481   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:34.559543   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:34.598700   79191 cri.go:89] found id: ""
	I0816 00:36:34.598728   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.598739   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:34.598747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:34.598808   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:34.632413   79191 cri.go:89] found id: ""
	I0816 00:36:34.632438   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.632448   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:34.632456   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:34.632514   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:34.668385   79191 cri.go:89] found id: ""
	I0816 00:36:34.668409   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.668418   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:34.668425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:34.668486   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:34.703728   79191 cri.go:89] found id: ""
	I0816 00:36:34.703754   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.703764   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:34.703772   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:34.703832   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:34.743119   79191 cri.go:89] found id: ""
	I0816 00:36:34.743152   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.743161   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:34.743171   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:34.743230   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:34.778932   79191 cri.go:89] found id: ""
	I0816 00:36:34.778955   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.778963   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:34.778971   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:34.778987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:34.832050   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:34.832084   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:34.845700   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:34.845728   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:34.917535   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:34.917554   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:34.917565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:35.005262   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:35.005295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:32.994435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:34.994503   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.926422   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.876400   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:38.376351   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.547107   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:37.562035   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:37.562095   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:37.605992   79191 cri.go:89] found id: ""
	I0816 00:36:37.606021   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.606028   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:37.606035   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:37.606092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:37.642613   79191 cri.go:89] found id: ""
	I0816 00:36:37.642642   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.642653   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:37.642660   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:37.642708   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:37.677810   79191 cri.go:89] found id: ""
	I0816 00:36:37.677863   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.677875   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:37.677883   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:37.677939   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:37.714490   79191 cri.go:89] found id: ""
	I0816 00:36:37.714514   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.714522   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:37.714529   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:37.714575   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:37.750807   79191 cri.go:89] found id: ""
	I0816 00:36:37.750837   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.750844   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:37.750850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:37.750912   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:37.790307   79191 cri.go:89] found id: ""
	I0816 00:36:37.790337   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.790347   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:37.790355   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:37.790404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:37.826811   79191 cri.go:89] found id: ""
	I0816 00:36:37.826838   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.826848   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:37.826856   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:37.826920   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:37.862066   79191 cri.go:89] found id: ""
	I0816 00:36:37.862091   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.862101   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:37.862112   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:37.862127   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:37.917127   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:37.917161   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:37.932986   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:37.933024   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:38.008715   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:38.008739   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:38.008754   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:38.088744   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:38.088778   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:40.643426   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:40.659064   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:40.659128   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:40.702486   79191 cri.go:89] found id: ""
	I0816 00:36:40.702513   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.702523   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:40.702530   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:40.702595   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:40.736016   79191 cri.go:89] found id: ""
	I0816 00:36:40.736044   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.736057   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:40.736064   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:40.736125   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:40.779665   79191 cri.go:89] found id: ""
	I0816 00:36:40.779704   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.779724   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:40.779733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:40.779795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:40.818612   79191 cri.go:89] found id: ""
	I0816 00:36:40.818633   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.818640   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:40.818647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:40.818695   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:40.855990   79191 cri.go:89] found id: ""
	I0816 00:36:40.856014   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.856021   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:40.856027   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:40.856074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:40.894792   79191 cri.go:89] found id: ""
	I0816 00:36:40.894827   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.894836   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:40.894845   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:40.894894   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:40.932233   79191 cri.go:89] found id: ""
	I0816 00:36:40.932255   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.932263   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:40.932268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:40.932324   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:40.974601   79191 cri.go:89] found id: ""
	I0816 00:36:40.974624   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.974633   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:40.974642   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:40.974660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:41.049185   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:41.049209   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:41.049223   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:41.129446   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:41.129481   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:41.170312   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:41.170341   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:41.226217   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:41.226254   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:36.995268   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:39.494273   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:41.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.426501   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.926122   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.877227   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.878644   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:43.741485   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:43.756248   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:43.756325   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:43.792440   79191 cri.go:89] found id: ""
	I0816 00:36:43.792469   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.792480   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:43.792488   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:43.792549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:43.829906   79191 cri.go:89] found id: ""
	I0816 00:36:43.829933   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.829941   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:43.829947   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:43.830003   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:43.880305   79191 cri.go:89] found id: ""
	I0816 00:36:43.880330   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.880337   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:43.880343   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:43.880399   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:43.937899   79191 cri.go:89] found id: ""
	I0816 00:36:43.937929   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.937939   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:43.937953   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:43.938023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:43.997578   79191 cri.go:89] found id: ""
	I0816 00:36:43.997603   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.997610   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:43.997620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:43.997672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:44.035606   79191 cri.go:89] found id: ""
	I0816 00:36:44.035629   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.035637   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:44.035643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:44.035692   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:44.072919   79191 cri.go:89] found id: ""
	I0816 00:36:44.072950   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.072961   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:44.072968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:44.073043   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:44.108629   79191 cri.go:89] found id: ""
	I0816 00:36:44.108659   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.108681   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:44.108692   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:44.108705   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:44.149127   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:44.149151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:44.201694   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:44.201737   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:44.217161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:44.217199   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:44.284335   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:44.284362   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:44.284379   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:43.996478   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.494382   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:44.926542   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.926713   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:45.376030   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:47.875418   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.877201   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.869196   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:46.883519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:46.883584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:46.924767   79191 cri.go:89] found id: ""
	I0816 00:36:46.924806   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.924821   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:46.924829   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:46.924889   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:46.963282   79191 cri.go:89] found id: ""
	I0816 00:36:46.963309   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.963320   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:46.963327   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:46.963389   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:47.001421   79191 cri.go:89] found id: ""
	I0816 00:36:47.001450   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.001458   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:47.001463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:47.001518   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:47.037679   79191 cri.go:89] found id: ""
	I0816 00:36:47.037702   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.037713   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:47.037720   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:47.037778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:47.078009   79191 cri.go:89] found id: ""
	I0816 00:36:47.078039   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.078050   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:47.078056   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:47.078113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:47.119032   79191 cri.go:89] found id: ""
	I0816 00:36:47.119056   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.119064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:47.119069   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:47.119127   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:47.154893   79191 cri.go:89] found id: ""
	I0816 00:36:47.154919   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.154925   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:47.154933   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:47.154993   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:47.194544   79191 cri.go:89] found id: ""
	I0816 00:36:47.194571   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.194582   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:47.194592   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:47.194612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:47.267148   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:47.267172   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:47.267186   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:47.345257   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:47.345295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:47.386207   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:47.386233   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:47.436171   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:47.436201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:49.949977   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:49.965702   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:49.965761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:50.002443   79191 cri.go:89] found id: ""
	I0816 00:36:50.002470   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.002481   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:50.002489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:50.002548   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:50.039123   79191 cri.go:89] found id: ""
	I0816 00:36:50.039155   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.039162   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:50.039168   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:50.039220   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:50.074487   79191 cri.go:89] found id: ""
	I0816 00:36:50.074517   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.074527   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:50.074535   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:50.074593   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:50.108980   79191 cri.go:89] found id: ""
	I0816 00:36:50.109008   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.109018   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:50.109025   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:50.109082   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:50.149182   79191 cri.go:89] found id: ""
	I0816 00:36:50.149202   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.149209   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:50.149215   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:50.149261   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:50.183066   79191 cri.go:89] found id: ""
	I0816 00:36:50.183094   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.183102   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:50.183108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:50.183165   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:50.220200   79191 cri.go:89] found id: ""
	I0816 00:36:50.220231   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.220240   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:50.220246   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:50.220302   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:50.258059   79191 cri.go:89] found id: ""
	I0816 00:36:50.258083   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.258092   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:50.258100   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:50.258110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:50.300560   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:50.300591   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:50.350548   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:50.350581   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:50.364792   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:50.364816   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:50.437723   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:50.437746   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:50.437761   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:48.995009   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:50.995542   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.425926   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:51.427896   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.926363   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:52.375826   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:54.876435   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.015846   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:53.029184   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:53.029246   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:53.064306   79191 cri.go:89] found id: ""
	I0816 00:36:53.064338   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.064346   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:53.064352   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:53.064404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:53.104425   79191 cri.go:89] found id: ""
	I0816 00:36:53.104458   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.104468   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:53.104476   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:53.104538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:53.139470   79191 cri.go:89] found id: ""
	I0816 00:36:53.139493   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.139500   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:53.139506   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:53.139551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:53.185195   79191 cri.go:89] found id: ""
	I0816 00:36:53.185225   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.185234   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:53.185242   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:53.185300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:53.221897   79191 cri.go:89] found id: ""
	I0816 00:36:53.221925   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.221935   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:53.221943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:53.222006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:53.258810   79191 cri.go:89] found id: ""
	I0816 00:36:53.258841   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.258852   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:53.258859   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:53.258924   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:53.298672   79191 cri.go:89] found id: ""
	I0816 00:36:53.298701   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.298711   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:53.298719   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:53.298778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:53.333498   79191 cri.go:89] found id: ""
	I0816 00:36:53.333520   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.333527   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:53.333535   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:53.333548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.370495   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:53.370530   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:53.423938   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:53.423982   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:53.438897   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:53.438926   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:53.505951   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:53.505973   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:53.505987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.089638   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:56.103832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:56.103893   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:56.148010   79191 cri.go:89] found id: ""
	I0816 00:36:56.148038   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.148048   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:56.148057   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:56.148120   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:56.185631   79191 cri.go:89] found id: ""
	I0816 00:36:56.185663   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.185673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:56.185680   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:56.185739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:56.222064   79191 cri.go:89] found id: ""
	I0816 00:36:56.222093   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.222104   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:56.222112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:56.222162   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:56.260462   79191 cri.go:89] found id: ""
	I0816 00:36:56.260494   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.260504   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:56.260513   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:56.260574   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:56.296125   79191 cri.go:89] found id: ""
	I0816 00:36:56.296154   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.296164   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:56.296172   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:56.296236   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:56.333278   79191 cri.go:89] found id: ""
	I0816 00:36:56.333305   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.333316   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:56.333324   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:56.333385   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:56.368924   79191 cri.go:89] found id: ""
	I0816 00:36:56.368952   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.368962   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:56.368970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:56.369034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:56.407148   79191 cri.go:89] found id: ""
	I0816 00:36:56.407180   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.407190   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:56.407201   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:56.407215   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:56.464745   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:56.464779   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:56.478177   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:56.478204   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:56.555827   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:56.555851   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:56.555864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.640001   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:56.640040   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.495546   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.994786   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:58.426865   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:57.376484   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.876765   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.181423   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:59.195722   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:59.195804   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:59.232043   79191 cri.go:89] found id: ""
	I0816 00:36:59.232067   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.232075   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:59.232081   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:59.232132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:59.270628   79191 cri.go:89] found id: ""
	I0816 00:36:59.270656   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.270673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:59.270681   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:59.270743   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:59.304054   79191 cri.go:89] found id: ""
	I0816 00:36:59.304089   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.304100   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:59.304108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:59.304169   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:59.339386   79191 cri.go:89] found id: ""
	I0816 00:36:59.339410   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.339417   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:59.339423   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:59.339483   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:59.381313   79191 cri.go:89] found id: ""
	I0816 00:36:59.381361   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.381376   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:59.381385   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:59.381449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:59.417060   79191 cri.go:89] found id: ""
	I0816 00:36:59.417090   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.417101   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:59.417109   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:59.417160   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:59.461034   79191 cri.go:89] found id: ""
	I0816 00:36:59.461060   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.461071   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:59.461078   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:59.461136   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:59.496248   79191 cri.go:89] found id: ""
	I0816 00:36:59.496276   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.496286   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:59.496297   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:59.496312   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:59.566779   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:59.566803   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:59.566829   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:59.651999   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:59.652034   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:59.693286   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:59.693310   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:59.746677   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:59.746711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:58.494370   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.494959   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.927036   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:03.425008   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.376921   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.876676   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.262527   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:02.277903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:02.277965   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:02.323846   79191 cri.go:89] found id: ""
	I0816 00:37:02.323868   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.323876   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:02.323882   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:02.323938   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:02.359552   79191 cri.go:89] found id: ""
	I0816 00:37:02.359578   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.359589   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:02.359596   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:02.359657   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:02.395062   79191 cri.go:89] found id: ""
	I0816 00:37:02.395087   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.395094   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:02.395100   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:02.395155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:02.432612   79191 cri.go:89] found id: ""
	I0816 00:37:02.432636   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.432646   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:02.432654   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:02.432712   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:02.468612   79191 cri.go:89] found id: ""
	I0816 00:37:02.468640   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.468651   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:02.468659   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:02.468716   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:02.514472   79191 cri.go:89] found id: ""
	I0816 00:37:02.514500   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.514511   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:02.514519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:02.514576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:02.551964   79191 cri.go:89] found id: ""
	I0816 00:37:02.551993   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.552003   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:02.552011   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:02.552061   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:02.588018   79191 cri.go:89] found id: ""
	I0816 00:37:02.588044   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.588053   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:02.588063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:02.588081   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:02.638836   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:02.638875   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:02.653581   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:02.653613   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:02.737018   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.737047   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:02.737065   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:02.819726   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:02.819763   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.364943   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:05.379433   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:05.379492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:05.419165   79191 cri.go:89] found id: ""
	I0816 00:37:05.419191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.419198   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:05.419204   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:05.419264   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:05.454417   79191 cri.go:89] found id: ""
	I0816 00:37:05.454438   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.454446   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:05.454452   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:05.454497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:05.490162   79191 cri.go:89] found id: ""
	I0816 00:37:05.490191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.490203   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:05.490210   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:05.490268   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:05.527303   79191 cri.go:89] found id: ""
	I0816 00:37:05.527327   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.527334   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:05.527340   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:05.527393   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:05.562271   79191 cri.go:89] found id: ""
	I0816 00:37:05.562302   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.562310   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:05.562316   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:05.562374   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:05.597800   79191 cri.go:89] found id: ""
	I0816 00:37:05.597823   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.597830   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:05.597837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:05.597905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:05.633996   79191 cri.go:89] found id: ""
	I0816 00:37:05.634021   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.634028   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:05.634034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:05.634088   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:05.672408   79191 cri.go:89] found id: ""
	I0816 00:37:05.672437   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.672446   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:05.672457   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:05.672472   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:05.750956   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:05.750995   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.795573   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:05.795603   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:05.848560   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:05.848593   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:05.862245   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:05.862268   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:05.938704   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.495728   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.994839   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:05.425507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:07.426459   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:06.877664   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.375601   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:08.439692   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:08.452850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:08.452927   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:08.490015   79191 cri.go:89] found id: ""
	I0816 00:37:08.490043   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.490053   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:08.490060   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:08.490121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:08.529631   79191 cri.go:89] found id: ""
	I0816 00:37:08.529665   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.529676   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:08.529689   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:08.529747   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:08.564858   79191 cri.go:89] found id: ""
	I0816 00:37:08.564885   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.564896   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:08.564904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:08.564966   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:08.601144   79191 cri.go:89] found id: ""
	I0816 00:37:08.601180   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.601190   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:08.601200   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:08.601257   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:08.637050   79191 cri.go:89] found id: ""
	I0816 00:37:08.637081   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.637090   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:08.637098   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:08.637158   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:08.670613   79191 cri.go:89] found id: ""
	I0816 00:37:08.670644   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.670655   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:08.670663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:08.670727   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:08.704664   79191 cri.go:89] found id: ""
	I0816 00:37:08.704690   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.704698   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:08.704704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:08.704754   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:08.741307   79191 cri.go:89] found id: ""
	I0816 00:37:08.741337   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.741348   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:08.741360   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:08.741374   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:08.755434   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:08.755459   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:08.828118   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:08.828140   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:08.828151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:08.911565   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:08.911605   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:08.954907   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:08.954937   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.508848   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:11.521998   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:11.522060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:11.558581   79191 cri.go:89] found id: ""
	I0816 00:37:11.558611   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.558622   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:11.558630   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:11.558697   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:11.593798   79191 cri.go:89] found id: ""
	I0816 00:37:11.593822   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.593830   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:11.593836   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:11.593905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:11.629619   79191 cri.go:89] found id: ""
	I0816 00:37:11.629648   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.629658   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:11.629664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:11.629717   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:11.666521   79191 cri.go:89] found id: ""
	I0816 00:37:11.666548   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.666556   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:11.666562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:11.666607   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:11.703374   79191 cri.go:89] found id: ""
	I0816 00:37:11.703406   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.703417   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:11.703427   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:11.703491   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:11.739374   79191 cri.go:89] found id: ""
	I0816 00:37:11.739403   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.739413   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:11.739420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:11.739475   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:11.774981   79191 cri.go:89] found id: ""
	I0816 00:37:11.775006   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.775013   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:11.775019   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:11.775074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:06.995675   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.495024   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:12.428179   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.377241   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.875723   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.809561   79191 cri.go:89] found id: ""
	I0816 00:37:11.809590   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.809601   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:11.809612   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:11.809626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.863071   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:11.863116   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:11.878161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:11.878191   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:11.953572   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.953594   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:11.953608   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:12.035815   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:12.035848   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:14.576547   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:14.590747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:14.590802   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:14.626732   79191 cri.go:89] found id: ""
	I0816 00:37:14.626762   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.626774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:14.626781   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:14.626833   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:14.662954   79191 cri.go:89] found id: ""
	I0816 00:37:14.662978   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.662988   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:14.662996   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:14.663057   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:14.697618   79191 cri.go:89] found id: ""
	I0816 00:37:14.697646   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.697656   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:14.697663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:14.697725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:14.735137   79191 cri.go:89] found id: ""
	I0816 00:37:14.735161   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.735168   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:14.735174   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:14.735222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:14.770625   79191 cri.go:89] found id: ""
	I0816 00:37:14.770648   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.770655   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:14.770660   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:14.770718   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:14.808678   79191 cri.go:89] found id: ""
	I0816 00:37:14.808708   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.808718   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:14.808726   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:14.808795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:14.847321   79191 cri.go:89] found id: ""
	I0816 00:37:14.847349   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.847360   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:14.847368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:14.847425   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:14.886110   79191 cri.go:89] found id: ""
	I0816 00:37:14.886136   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.886147   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:14.886156   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:14.886175   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:14.971978   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:14.972013   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:15.015620   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:15.015644   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:15.067372   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:15.067405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:15.081629   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:15.081652   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:15.151580   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.995551   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.995831   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.495016   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:14.926297   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.926367   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:18.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:15.876514   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.877987   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.652362   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:17.666201   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:17.666278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:17.698723   79191 cri.go:89] found id: ""
	I0816 00:37:17.698760   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.698772   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:17.698778   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:17.698827   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:17.732854   79191 cri.go:89] found id: ""
	I0816 00:37:17.732883   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.732893   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:17.732901   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:17.732957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:17.767665   79191 cri.go:89] found id: ""
	I0816 00:37:17.767691   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.767701   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:17.767709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:17.767769   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:17.801490   79191 cri.go:89] found id: ""
	I0816 00:37:17.801512   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.801520   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:17.801526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:17.801579   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:17.837451   79191 cri.go:89] found id: ""
	I0816 00:37:17.837479   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.837490   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:17.837498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:17.837562   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:17.872898   79191 cri.go:89] found id: ""
	I0816 00:37:17.872924   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.872934   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:17.872943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:17.873002   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:17.910325   79191 cri.go:89] found id: ""
	I0816 00:37:17.910352   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.910362   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:17.910370   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:17.910431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:17.946885   79191 cri.go:89] found id: ""
	I0816 00:37:17.946909   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.946916   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:17.946923   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:17.946935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:18.014011   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:18.014045   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:18.028850   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:18.028886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:18.099362   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:18.099381   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:18.099396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:18.180552   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:18.180588   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:20.720810   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:20.733806   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:20.733887   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:20.771300   79191 cri.go:89] found id: ""
	I0816 00:37:20.771323   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.771330   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:20.771336   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:20.771394   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:20.812327   79191 cri.go:89] found id: ""
	I0816 00:37:20.812355   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.812362   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:20.812369   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:20.812430   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:20.846830   79191 cri.go:89] found id: ""
	I0816 00:37:20.846861   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.846872   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:20.846879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:20.846948   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:20.889979   79191 cri.go:89] found id: ""
	I0816 00:37:20.890005   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.890015   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:20.890023   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:20.890086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:20.933732   79191 cri.go:89] found id: ""
	I0816 00:37:20.933762   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.933772   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:20.933778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:20.933824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:20.972341   79191 cri.go:89] found id: ""
	I0816 00:37:20.972368   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.972376   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:20.972382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:20.972444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:21.011179   79191 cri.go:89] found id: ""
	I0816 00:37:21.011207   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.011216   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:21.011224   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:21.011282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:21.045645   79191 cri.go:89] found id: ""
	I0816 00:37:21.045668   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.045675   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:21.045684   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:21.045694   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:21.099289   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:21.099321   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:21.113814   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:21.113858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:21.186314   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:21.186337   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:21.186355   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:21.271116   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:21.271152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:18.994476   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.996435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:21.425187   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.425456   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.377999   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:22.877014   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.818598   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:23.832330   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:23.832387   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:23.869258   79191 cri.go:89] found id: ""
	I0816 00:37:23.869279   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.869286   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:23.869293   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:23.869342   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:23.903958   79191 cri.go:89] found id: ""
	I0816 00:37:23.903989   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.903999   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:23.904006   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:23.904060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:23.943110   79191 cri.go:89] found id: ""
	I0816 00:37:23.943142   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.943153   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:23.943160   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:23.943222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:23.979325   79191 cri.go:89] found id: ""
	I0816 00:37:23.979356   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.979366   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:23.979374   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:23.979435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:24.017570   79191 cri.go:89] found id: ""
	I0816 00:37:24.017597   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.017607   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:24.017614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:24.017684   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:24.051522   79191 cri.go:89] found id: ""
	I0816 00:37:24.051546   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.051555   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:24.051562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:24.051626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:24.087536   79191 cri.go:89] found id: ""
	I0816 00:37:24.087561   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.087572   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:24.087579   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:24.087644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:24.123203   79191 cri.go:89] found id: ""
	I0816 00:37:24.123233   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.123245   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:24.123256   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:24.123276   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:24.178185   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:24.178225   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:24.192895   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:24.192920   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:24.273471   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:24.273492   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:24.273504   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:24.357890   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:24.357936   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:23.495269   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.994859   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.427328   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.927068   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.376932   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.377168   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:29.876182   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:26.950399   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:26.964347   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:26.964406   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:27.004694   79191 cri.go:89] found id: ""
	I0816 00:37:27.004722   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.004738   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:27.004745   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:27.004800   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:27.040051   79191 cri.go:89] found id: ""
	I0816 00:37:27.040080   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.040090   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:27.040096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:27.040144   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:27.088614   79191 cri.go:89] found id: ""
	I0816 00:37:27.088642   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.088651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:27.088657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:27.088732   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:27.125427   79191 cri.go:89] found id: ""
	I0816 00:37:27.125450   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.125457   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:27.125464   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:27.125511   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:27.158562   79191 cri.go:89] found id: ""
	I0816 00:37:27.158592   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.158602   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:27.158609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:27.158672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:27.192986   79191 cri.go:89] found id: ""
	I0816 00:37:27.193015   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.193026   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:27.193034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:27.193091   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:27.228786   79191 cri.go:89] found id: ""
	I0816 00:37:27.228828   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.228847   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:27.228858   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:27.228921   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:27.262776   79191 cri.go:89] found id: ""
	I0816 00:37:27.262808   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.262819   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:27.262829   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:27.262844   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:27.276444   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:27.276470   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:27.349918   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:27.349946   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:27.349958   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:27.435030   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:27.435061   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:27.484043   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:27.484069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.038376   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:30.051467   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:30.051530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:30.086346   79191 cri.go:89] found id: ""
	I0816 00:37:30.086376   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.086386   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:30.086394   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:30.086454   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:30.127665   79191 cri.go:89] found id: ""
	I0816 00:37:30.127691   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.127699   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:30.127704   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:30.127757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:30.169901   79191 cri.go:89] found id: ""
	I0816 00:37:30.169929   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.169939   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:30.169950   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:30.170013   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:30.212501   79191 cri.go:89] found id: ""
	I0816 00:37:30.212523   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.212530   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:30.212537   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:30.212584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:30.256560   79191 cri.go:89] found id: ""
	I0816 00:37:30.256583   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.256591   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:30.256597   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:30.256646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:30.291062   79191 cri.go:89] found id: ""
	I0816 00:37:30.291086   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.291093   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:30.291099   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:30.291143   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:30.328325   79191 cri.go:89] found id: ""
	I0816 00:37:30.328353   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.328361   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:30.328368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:30.328415   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:30.364946   79191 cri.go:89] found id: ""
	I0816 00:37:30.364972   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.364981   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:30.364991   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:30.365005   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:30.408090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:30.408117   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.463421   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:30.463456   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:30.479679   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:30.479711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:30.555394   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:30.555416   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:30.555432   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:28.494477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.494598   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.427146   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:32.926282   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:31.877446   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.376145   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:33.137366   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:33.150970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:33.151030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:33.191020   79191 cri.go:89] found id: ""
	I0816 00:37:33.191047   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.191055   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:33.191061   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:33.191112   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:33.227971   79191 cri.go:89] found id: ""
	I0816 00:37:33.228022   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.228030   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:33.228038   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:33.228089   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:33.265036   79191 cri.go:89] found id: ""
	I0816 00:37:33.265065   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.265074   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:33.265079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:33.265126   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:33.300385   79191 cri.go:89] found id: ""
	I0816 00:37:33.300411   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.300418   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:33.300425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:33.300487   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:33.335727   79191 cri.go:89] found id: ""
	I0816 00:37:33.335757   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.335768   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:33.335776   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:33.335839   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:33.373458   79191 cri.go:89] found id: ""
	I0816 00:37:33.373489   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.373500   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:33.373507   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:33.373568   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:33.410380   79191 cri.go:89] found id: ""
	I0816 00:37:33.410404   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.410413   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:33.410420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:33.410480   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:33.451007   79191 cri.go:89] found id: ""
	I0816 00:37:33.451030   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.451040   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:33.451049   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:33.451062   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:33.502215   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:33.502249   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:33.516123   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:33.516152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:33.590898   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:33.590921   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:33.590944   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:33.668404   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:33.668455   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:36.209671   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:36.223498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:36.223561   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:36.258980   79191 cri.go:89] found id: ""
	I0816 00:37:36.259041   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.259056   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:36.259064   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:36.259123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:36.293659   79191 cri.go:89] found id: ""
	I0816 00:37:36.293687   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.293694   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:36.293703   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:36.293761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:36.331729   79191 cri.go:89] found id: ""
	I0816 00:37:36.331756   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.331766   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:36.331773   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:36.331830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:36.368441   79191 cri.go:89] found id: ""
	I0816 00:37:36.368470   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.368479   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:36.368486   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:36.368533   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:36.405338   79191 cri.go:89] found id: ""
	I0816 00:37:36.405368   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.405380   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:36.405389   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:36.405448   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:36.441986   79191 cri.go:89] found id: ""
	I0816 00:37:36.442018   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.442029   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:36.442038   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:36.442097   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:36.478102   79191 cri.go:89] found id: ""
	I0816 00:37:36.478183   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.478197   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:36.478206   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:36.478269   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:36.517138   79191 cri.go:89] found id: ""
	I0816 00:37:36.517167   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.517178   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:36.517190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:36.517205   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:36.570009   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:36.570042   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:36.583534   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:36.583565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:36.651765   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:36.651794   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:36.651808   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:36.732836   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:36.732870   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:32.495090   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.996253   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.926615   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:37.425790   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:36.377305   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:38.876443   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.274490   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:39.288528   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:39.288591   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:39.325560   79191 cri.go:89] found id: ""
	I0816 00:37:39.325582   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.325589   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:39.325599   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:39.325656   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:39.365795   79191 cri.go:89] found id: ""
	I0816 00:37:39.365822   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.365829   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:39.365837   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:39.365906   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:39.404933   79191 cri.go:89] found id: ""
	I0816 00:37:39.404961   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.404971   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:39.404977   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:39.405041   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:39.442712   79191 cri.go:89] found id: ""
	I0816 00:37:39.442736   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.442747   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:39.442754   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:39.442814   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:39.484533   79191 cri.go:89] found id: ""
	I0816 00:37:39.484557   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.484566   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:39.484573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:39.484636   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:39.522089   79191 cri.go:89] found id: ""
	I0816 00:37:39.522115   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.522125   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:39.522133   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:39.522194   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:39.557099   79191 cri.go:89] found id: ""
	I0816 00:37:39.557128   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.557138   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:39.557145   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:39.557205   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:39.594809   79191 cri.go:89] found id: ""
	I0816 00:37:39.594838   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.594849   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:39.594859   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:39.594874   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:39.611079   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:39.611110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:39.683156   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:39.683182   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:39.683198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:39.761198   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:39.761235   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:39.800972   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:39.801003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:37.494553   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.495854   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.427910   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.926445   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.376128   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:43.377791   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:42.354816   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:42.368610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:42.368673   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:42.404716   79191 cri.go:89] found id: ""
	I0816 00:37:42.404738   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.404745   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:42.404753   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:42.404798   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:42.441619   79191 cri.go:89] found id: ""
	I0816 00:37:42.441649   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.441660   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:42.441667   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:42.441726   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:42.480928   79191 cri.go:89] found id: ""
	I0816 00:37:42.480965   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.480976   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:42.480983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:42.481051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:42.519187   79191 cri.go:89] found id: ""
	I0816 00:37:42.519216   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.519226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:42.519234   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:42.519292   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:42.554928   79191 cri.go:89] found id: ""
	I0816 00:37:42.554956   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.554967   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:42.554974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:42.555035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:42.593436   79191 cri.go:89] found id: ""
	I0816 00:37:42.593472   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.593481   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:42.593487   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:42.593545   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:42.628078   79191 cri.go:89] found id: ""
	I0816 00:37:42.628101   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.628108   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:42.628113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:42.628172   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:42.662824   79191 cri.go:89] found id: ""
	I0816 00:37:42.662852   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.662862   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:42.662871   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:42.662888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:42.677267   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:42.677290   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:42.749570   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:42.749599   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:42.749615   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:42.831177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:42.831213   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:42.871928   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:42.871957   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.430704   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:45.444400   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:45.444461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:45.479503   79191 cri.go:89] found id: ""
	I0816 00:37:45.479529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.479537   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:45.479543   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:45.479596   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:45.518877   79191 cri.go:89] found id: ""
	I0816 00:37:45.518907   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.518917   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:45.518925   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:45.518992   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:45.553936   79191 cri.go:89] found id: ""
	I0816 00:37:45.553966   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.553977   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:45.553984   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:45.554035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:45.593054   79191 cri.go:89] found id: ""
	I0816 00:37:45.593081   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.593088   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:45.593095   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:45.593147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:45.631503   79191 cri.go:89] found id: ""
	I0816 00:37:45.631529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.631537   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:45.631543   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:45.631599   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:45.667435   79191 cri.go:89] found id: ""
	I0816 00:37:45.667459   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.667466   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:45.667473   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:45.667529   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:45.702140   79191 cri.go:89] found id: ""
	I0816 00:37:45.702168   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.702179   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:45.702187   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:45.702250   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:45.736015   79191 cri.go:89] found id: ""
	I0816 00:37:45.736048   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.736059   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:45.736070   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:45.736085   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:45.817392   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:45.817427   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:45.856421   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:45.856451   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.912429   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:45.912476   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:45.928411   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:45.928435   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:46.001141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:41.995835   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.497033   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.426414   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:46.927720   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:45.876721   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:47.877185   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.877396   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.501317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:48.515114   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:48.515190   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:48.553776   79191 cri.go:89] found id: ""
	I0816 00:37:48.553802   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.553810   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:48.553816   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:48.553890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:48.589760   79191 cri.go:89] found id: ""
	I0816 00:37:48.589786   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.589794   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:48.589800   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:48.589871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:48.629792   79191 cri.go:89] found id: ""
	I0816 00:37:48.629816   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.629825   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:48.629833   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:48.629898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:48.668824   79191 cri.go:89] found id: ""
	I0816 00:37:48.668852   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.668860   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:48.668866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:48.668930   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:48.704584   79191 cri.go:89] found id: ""
	I0816 00:37:48.704615   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.704626   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:48.704634   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:48.704691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:48.738833   79191 cri.go:89] found id: ""
	I0816 00:37:48.738855   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.738863   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:48.738868   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:48.738928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:48.774943   79191 cri.go:89] found id: ""
	I0816 00:37:48.774972   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.774981   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:48.774989   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:48.775051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:48.808802   79191 cri.go:89] found id: ""
	I0816 00:37:48.808825   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.808832   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:48.808841   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:48.808856   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:48.858849   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:48.858880   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:48.873338   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:48.873369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:48.950172   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:48.950195   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:48.950209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:49.038642   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:49.038679   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:51.581947   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:51.596612   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:51.596691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:51.631468   79191 cri.go:89] found id: ""
	I0816 00:37:51.631498   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.631509   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:51.631517   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:51.631577   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:51.666922   79191 cri.go:89] found id: ""
	I0816 00:37:51.666953   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.666963   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:51.666971   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:51.667034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:51.707081   79191 cri.go:89] found id: ""
	I0816 00:37:51.707109   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.707116   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:51.707122   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:51.707189   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:51.743884   79191 cri.go:89] found id: ""
	I0816 00:37:51.743912   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.743925   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:51.743932   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:51.743990   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:51.779565   79191 cri.go:89] found id: ""
	I0816 00:37:51.779595   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.779603   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:51.779610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:51.779658   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:46.994211   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.995446   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.495519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.426703   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.426947   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:53.427050   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:52.377050   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.877759   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.818800   79191 cri.go:89] found id: ""
	I0816 00:37:51.818824   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.818831   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:51.818837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:51.818899   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:51.855343   79191 cri.go:89] found id: ""
	I0816 00:37:51.855367   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.855374   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:51.855380   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:51.855426   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:51.890463   79191 cri.go:89] found id: ""
	I0816 00:37:51.890496   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.890505   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:51.890513   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:51.890526   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:51.977168   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:51.977209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:52.021626   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:52.021660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:52.076983   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:52.077027   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:52.092111   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:52.092142   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:52.172738   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:54.673192   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:54.688780   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.688853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.725279   79191 cri.go:89] found id: ""
	I0816 00:37:54.725308   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.725318   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:54.725325   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.725383   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:54.764326   79191 cri.go:89] found id: ""
	I0816 00:37:54.764353   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.764364   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:54.764372   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:54.764423   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:54.805221   79191 cri.go:89] found id: ""
	I0816 00:37:54.805252   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.805263   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:54.805270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:54.805334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:54.849724   79191 cri.go:89] found id: ""
	I0816 00:37:54.849750   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.849759   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:54.849765   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:54.849824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:54.894438   79191 cri.go:89] found id: ""
	I0816 00:37:54.894460   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.894468   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:54.894475   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:54.894532   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:54.933400   79191 cri.go:89] found id: ""
	I0816 00:37:54.933422   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.933431   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:54.933439   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:54.933497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:54.982249   79191 cri.go:89] found id: ""
	I0816 00:37:54.982277   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.982286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:54.982294   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:54.982353   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:55.024431   79191 cri.go:89] found id: ""
	I0816 00:37:55.024458   79191 logs.go:276] 0 containers: []
	W0816 00:37:55.024469   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:55.024479   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.024499   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:55.107089   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:55.107119   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:55.148949   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.148981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.202865   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.202902   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.218528   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.218556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:55.304995   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:53.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:55.995483   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.926671   78713 pod_ready.go:82] duration metric: took 4m0.007058537s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:37:54.926700   78713 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:37:54.926711   78713 pod_ready.go:39] duration metric: took 4m7.919515966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:37:54.926728   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:37:54.926764   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.926821   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.983024   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:54.983043   78713 cri.go:89] found id: ""
	I0816 00:37:54.983052   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:54.983103   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:54.988579   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.988644   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:55.035200   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.035231   78713 cri.go:89] found id: ""
	I0816 00:37:55.035241   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:55.035291   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.040701   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:55.040777   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:55.087306   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.087330   78713 cri.go:89] found id: ""
	I0816 00:37:55.087340   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:55.087422   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.092492   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:55.092560   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:55.144398   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.144424   78713 cri.go:89] found id: ""
	I0816 00:37:55.144433   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:55.144494   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.149882   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:55.149953   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:55.193442   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.193464   78713 cri.go:89] found id: ""
	I0816 00:37:55.193472   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:55.193528   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.198812   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:55.198886   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:55.238634   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.238656   78713 cri.go:89] found id: ""
	I0816 00:37:55.238666   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:55.238729   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.243141   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:55.243229   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:55.281414   78713 cri.go:89] found id: ""
	I0816 00:37:55.281439   78713 logs.go:276] 0 containers: []
	W0816 00:37:55.281449   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:55.281457   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:55.281519   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:55.319336   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.319357   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.319363   78713 cri.go:89] found id: ""
	I0816 00:37:55.319371   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:55.319431   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.323837   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.328777   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:55.328801   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.376259   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:55.376290   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.419553   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:55.419584   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.476026   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.476058   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.544263   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.544297   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.561818   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.561858   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:55.701342   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:55.701375   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:55.746935   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:55.746968   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.787200   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:55.787234   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.825257   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:55.825282   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.865569   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:55.865594   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.905234   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.905269   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:56.391175   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:56.391208   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:58.943163   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:58.961551   78713 api_server.go:72] duration metric: took 4m17.689832084s to wait for apiserver process to appear ...
	I0816 00:37:58.961592   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:37:58.961630   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:58.961697   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:59.001773   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.001794   78713 cri.go:89] found id: ""
	I0816 00:37:59.001803   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:59.001876   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.006168   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:59.006222   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:59.041625   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.041647   78713 cri.go:89] found id: ""
	I0816 00:37:59.041654   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:59.041715   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.046258   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:59.046323   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:59.086070   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.086089   78713 cri.go:89] found id: ""
	I0816 00:37:59.086097   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:59.086151   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.090556   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:59.090626   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:59.129889   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.129931   78713 cri.go:89] found id: ""
	I0816 00:37:59.129942   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:59.130008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.135694   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:59.135775   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.375656   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.375979   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:57.805335   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:57.819904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:57.819989   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:57.856119   79191 cri.go:89] found id: ""
	I0816 00:37:57.856146   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.856153   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:57.856160   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:57.856217   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:57.892797   79191 cri.go:89] found id: ""
	I0816 00:37:57.892825   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.892833   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:57.892841   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:57.892905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:57.928753   79191 cri.go:89] found id: ""
	I0816 00:37:57.928784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.928795   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:57.928803   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:57.928884   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:57.963432   79191 cri.go:89] found id: ""
	I0816 00:37:57.963462   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.963474   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:57.963481   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:57.963538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.998759   79191 cri.go:89] found id: ""
	I0816 00:37:57.998784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.998793   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:57.998801   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:57.998886   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:58.035262   79191 cri.go:89] found id: ""
	I0816 00:37:58.035288   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.035296   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:58.035303   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:58.035358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:58.071052   79191 cri.go:89] found id: ""
	I0816 00:37:58.071079   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.071087   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:58.071092   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:58.071150   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:58.110047   79191 cri.go:89] found id: ""
	I0816 00:37:58.110074   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.110083   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:58.110090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:58.110101   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:58.164792   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:58.164823   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:58.178742   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:58.178770   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:58.251861   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:58.251899   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:58.251921   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:58.329805   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:58.329859   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:00.872911   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:00.887914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:00.887986   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:00.925562   79191 cri.go:89] found id: ""
	I0816 00:38:00.925595   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.925606   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:00.925615   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:00.925669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:00.961476   79191 cri.go:89] found id: ""
	I0816 00:38:00.961498   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.961505   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:00.961510   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:00.961554   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:00.997575   79191 cri.go:89] found id: ""
	I0816 00:38:00.997599   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.997608   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:00.997616   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:00.997677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:01.035130   79191 cri.go:89] found id: ""
	I0816 00:38:01.035158   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.035169   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:01.035177   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:01.035232   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:01.073768   79191 cri.go:89] found id: ""
	I0816 00:38:01.073800   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.073811   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:01.073819   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:01.073898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:01.107904   79191 cri.go:89] found id: ""
	I0816 00:38:01.107928   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.107937   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:01.107943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:01.108004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:01.142654   79191 cri.go:89] found id: ""
	I0816 00:38:01.142690   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.142701   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:01.142709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:01.142766   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:01.187565   79191 cri.go:89] found id: ""
	I0816 00:38:01.187599   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.187610   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:01.187621   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:01.187635   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:01.265462   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:01.265493   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:01.265508   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:01.346988   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:01.347020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:01.390977   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:01.391006   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.443858   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:01.443892   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:57.996188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:00.495210   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.176702   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.176728   78713 cri.go:89] found id: ""
	I0816 00:37:59.176738   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:59.176799   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.182305   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:59.182387   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:59.223938   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.223960   78713 cri.go:89] found id: ""
	I0816 00:37:59.223968   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:59.224023   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.228818   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:59.228884   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:59.264566   78713 cri.go:89] found id: ""
	I0816 00:37:59.264589   78713 logs.go:276] 0 containers: []
	W0816 00:37:59.264597   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:59.264606   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:59.264654   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:59.302534   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.302560   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.302565   78713 cri.go:89] found id: ""
	I0816 00:37:59.302574   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:59.302621   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.307021   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.311258   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:59.311299   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:59.425542   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:59.425574   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.466078   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:59.466107   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:59.480894   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:59.480925   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.524790   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:59.524822   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.568832   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:59.568862   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.619399   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:59.619433   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.658616   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:59.658645   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.720421   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:59.720469   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.756558   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:59.756586   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.798650   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:59.798674   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:59.864280   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:59.864323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:59.913086   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:59.913118   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:02.828194   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:38:02.832896   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:38:02.834035   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:02.834059   78713 api_server.go:131] duration metric: took 3.87246001s to wait for apiserver health ...
	I0816 00:38:02.834067   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:02.834089   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:02.834145   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:02.873489   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:02.873512   78713 cri.go:89] found id: ""
	I0816 00:38:02.873521   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:38:02.873577   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.878807   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:02.878883   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:02.919930   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:02.919949   78713 cri.go:89] found id: ""
	I0816 00:38:02.919957   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:38:02.920008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.924459   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:02.924525   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:02.964609   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:02.964636   78713 cri.go:89] found id: ""
	I0816 00:38:02.964644   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:38:02.964697   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.968808   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:02.968921   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:03.017177   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.017201   78713 cri.go:89] found id: ""
	I0816 00:38:03.017210   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:38:03.017275   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.021905   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:03.021992   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:03.061720   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.061741   78713 cri.go:89] found id: ""
	I0816 00:38:03.061748   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:38:03.061801   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.066149   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:03.066206   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:03.107130   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.107149   78713 cri.go:89] found id: ""
	I0816 00:38:03.107156   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:38:03.107213   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.111323   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:03.111372   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:03.149906   78713 cri.go:89] found id: ""
	I0816 00:38:03.149927   78713 logs.go:276] 0 containers: []
	W0816 00:38:03.149934   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:03.149940   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:03.150000   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:03.190981   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.191007   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.191011   78713 cri.go:89] found id: ""
	I0816 00:38:03.191018   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:38:03.191066   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.195733   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.199755   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:03.199775   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:03.302209   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:38:03.302239   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:03.352505   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:38:03.352548   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.392296   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:38:03.392323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.448092   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:38:03.448130   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.487516   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:38:03.487541   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:03.541954   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:03.541989   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:03.557026   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:38:03.557049   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:03.602639   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:38:03.602670   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:03.642706   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:38:03.642733   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.683504   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:38:03.683530   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.721802   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:03.721826   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.089579   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.089621   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.376613   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:03.376837   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:06.679744   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:06.679797   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.679805   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.679812   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.679819   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.679825   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.679849   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.679861   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.679869   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.679878   78713 system_pods.go:74] duration metric: took 3.845804999s to wait for pod list to return data ...
	I0816 00:38:06.679886   78713 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:06.682521   78713 default_sa.go:45] found service account: "default"
	I0816 00:38:06.682553   78713 default_sa.go:55] duration metric: took 2.660224ms for default service account to be created ...
	I0816 00:38:06.682565   78713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:06.688149   78713 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:06.688178   78713 system_pods.go:89] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.688183   78713 system_pods.go:89] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.688187   78713 system_pods.go:89] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.688192   78713 system_pods.go:89] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.688196   78713 system_pods.go:89] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.688199   78713 system_pods.go:89] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.688206   78713 system_pods.go:89] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.688213   78713 system_pods.go:89] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.688220   78713 system_pods.go:126] duration metric: took 5.649758ms to wait for k8s-apps to be running ...
	I0816 00:38:06.688226   78713 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:06.688268   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:06.706263   78713 system_svc.go:56] duration metric: took 18.025675ms WaitForService to wait for kubelet
	I0816 00:38:06.706301   78713 kubeadm.go:582] duration metric: took 4m25.434584326s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:06.706337   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:06.709536   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:06.709553   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:06.709565   78713 node_conditions.go:105] duration metric: took 3.213145ms to run NodePressure ...
	I0816 00:38:06.709576   78713 start.go:241] waiting for startup goroutines ...
	I0816 00:38:06.709582   78713 start.go:246] waiting for cluster config update ...
	I0816 00:38:06.709593   78713 start.go:255] writing updated cluster config ...
	I0816 00:38:06.709864   78713 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:06.755974   78713 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:06.757917   78713 out.go:177] * Done! kubectl is now configured to use "embed-certs-758469" cluster and "default" namespace by default
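	The crictl, journalctl, and dmesg invocations recorded above are the commands the test harness runs over SSH each time it gathers logs for a profile. A minimal sketch, assuming the embed-certs-758469 profile from this run is still up and minikube is on PATH, of reproducing the same collection by hand; the inner commands are copied verbatim from the log, and <container-id> is a placeholder for an ID reported by crictl ps:
	
	  minikube -p embed-certs-758469 ssh                                           # open a shell on the node
	  sudo crictl ps -a --quiet --name=kube-apiserver                              # list container IDs for a component
	  sudo /usr/bin/crictl logs --tail 400 <container-id>                          # dump a container's recent log
	  sudo journalctl -u crio -n 400                                               # CRI-O service log
	  sudo journalctl -u kubelet -n 400                                            # kubelet service log
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400      # recent kernel warnings/errors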
	I0816 00:38:03.959040   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:03.973674   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:03.973758   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:04.013606   79191 cri.go:89] found id: ""
	I0816 00:38:04.013653   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.013661   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:04.013667   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:04.013737   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:04.054558   79191 cri.go:89] found id: ""
	I0816 00:38:04.054590   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.054602   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:04.054609   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:04.054667   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:04.097116   79191 cri.go:89] found id: ""
	I0816 00:38:04.097143   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.097154   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:04.097162   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:04.097223   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:04.136770   79191 cri.go:89] found id: ""
	I0816 00:38:04.136798   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.136809   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:04.136816   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:04.136865   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:04.171906   79191 cri.go:89] found id: ""
	I0816 00:38:04.171929   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.171937   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:04.171943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:04.172004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:04.208694   79191 cri.go:89] found id: ""
	I0816 00:38:04.208725   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.208735   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:04.208744   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:04.208803   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:04.276713   79191 cri.go:89] found id: ""
	I0816 00:38:04.276744   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.276755   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:04.276763   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:04.276823   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:04.316646   79191 cri.go:89] found id: ""
	I0816 00:38:04.316669   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.316696   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:04.316707   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:04.316722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:04.329819   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:04.329864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:04.399032   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:04.399052   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:04.399080   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.487665   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:04.487698   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:04.530937   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.530962   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:02.496317   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:04.496477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:05.878535   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:08.377096   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:07.087584   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:07.102015   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:07.102086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:07.139530   79191 cri.go:89] found id: ""
	I0816 00:38:07.139559   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.139569   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:07.139577   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:07.139642   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:07.179630   79191 cri.go:89] found id: ""
	I0816 00:38:07.179659   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.179669   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:07.179675   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:07.179734   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:07.216407   79191 cri.go:89] found id: ""
	I0816 00:38:07.216435   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.216444   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:07.216449   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:07.216509   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:07.252511   79191 cri.go:89] found id: ""
	I0816 00:38:07.252536   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.252544   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:07.252551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:07.252613   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:07.288651   79191 cri.go:89] found id: ""
	I0816 00:38:07.288679   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.288689   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:07.288698   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:07.288757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:07.325910   79191 cri.go:89] found id: ""
	I0816 00:38:07.325963   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.325974   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:07.325982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:07.326046   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:07.362202   79191 cri.go:89] found id: ""
	I0816 00:38:07.362230   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.362244   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:07.362251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:07.362316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:07.405272   79191 cri.go:89] found id: ""
	I0816 00:38:07.405302   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.405313   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:07.405324   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:07.405339   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:07.461186   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:07.461222   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:07.475503   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:07.475544   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:07.555146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:07.555165   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:07.555179   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:07.635162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:07.635201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.174600   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:10.190418   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:10.190479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:10.251925   79191 cri.go:89] found id: ""
	I0816 00:38:10.251960   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.251969   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:10.251974   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:10.252027   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:10.289038   79191 cri.go:89] found id: ""
	I0816 00:38:10.289078   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.289088   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:10.289096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:10.289153   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:10.334562   79191 cri.go:89] found id: ""
	I0816 00:38:10.334591   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.334601   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:10.334609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:10.334669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:10.371971   79191 cri.go:89] found id: ""
	I0816 00:38:10.372000   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.372010   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:10.372018   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:10.372084   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:10.409654   79191 cri.go:89] found id: ""
	I0816 00:38:10.409685   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.409696   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:10.409703   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:10.409770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:10.446639   79191 cri.go:89] found id: ""
	I0816 00:38:10.446666   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.446675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:10.446683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:10.446750   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:10.483601   79191 cri.go:89] found id: ""
	I0816 00:38:10.483629   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.483641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:10.483648   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:10.483707   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:10.519640   79191 cri.go:89] found id: ""
	I0816 00:38:10.519670   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.519679   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:10.519690   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:10.519704   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:10.603281   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:10.603300   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:10.603311   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:10.689162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:10.689198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.730701   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:10.730724   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:10.780411   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:10.780441   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:06.997726   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:09.495539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.495753   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:10.876242   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.376332   78747 pod_ready.go:82] duration metric: took 4m0.006460655s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:38:11.376362   78747 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:38:11.376372   78747 pod_ready.go:39] duration metric: took 4m3.906659924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:38:11.376389   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:38:11.376416   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:11.376472   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:11.425716   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:11.425741   78747 cri.go:89] found id: ""
	I0816 00:38:11.425749   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:11.425804   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.431122   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:11.431195   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:11.468622   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:11.468647   78747 cri.go:89] found id: ""
	I0816 00:38:11.468657   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:11.468713   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.474270   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:11.474329   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:11.518448   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:11.518493   78747 cri.go:89] found id: ""
	I0816 00:38:11.518502   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:11.518569   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.524185   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:11.524242   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:11.561343   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:11.561367   78747 cri.go:89] found id: ""
	I0816 00:38:11.561374   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:11.561418   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.565918   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:11.565992   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:11.606010   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.606036   78747 cri.go:89] found id: ""
	I0816 00:38:11.606043   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:11.606097   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.610096   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:11.610166   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:11.646204   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:11.646229   78747 cri.go:89] found id: ""
	I0816 00:38:11.646238   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:11.646295   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.650405   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:11.650467   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:11.690407   78747 cri.go:89] found id: ""
	I0816 00:38:11.690436   78747 logs.go:276] 0 containers: []
	W0816 00:38:11.690446   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:11.690454   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:11.690510   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:11.736695   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:11.736722   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:11.736729   78747 cri.go:89] found id: ""
	I0816 00:38:11.736738   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:11.736803   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.741022   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.744983   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:11.745011   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.791452   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:11.791484   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:12.304425   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:12.304470   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:12.341318   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:12.341353   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:12.401425   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:12.401464   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:12.476598   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:12.476653   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:12.495594   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:12.495629   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:12.645961   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:12.645991   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:12.697058   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:12.697091   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:12.749085   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:12.749117   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:12.795786   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:12.795831   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:12.835928   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:12.835959   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:12.872495   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:12.872524   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:13.294689   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:13.308762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:13.308822   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:13.345973   79191 cri.go:89] found id: ""
	I0816 00:38:13.346004   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.346015   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:13.346022   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:13.346083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:13.382905   79191 cri.go:89] found id: ""
	I0816 00:38:13.382934   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.382945   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:13.382952   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:13.383001   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:13.417616   79191 cri.go:89] found id: ""
	I0816 00:38:13.417650   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.417662   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:13.417669   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:13.417739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:13.453314   79191 cri.go:89] found id: ""
	I0816 00:38:13.453350   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.453360   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:13.453368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:13.453435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:13.488507   79191 cri.go:89] found id: ""
	I0816 00:38:13.488536   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.488547   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:13.488555   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:13.488614   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:13.527064   79191 cri.go:89] found id: ""
	I0816 00:38:13.527095   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.527108   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:13.527116   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:13.527178   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:13.562838   79191 cri.go:89] found id: ""
	I0816 00:38:13.562867   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.562876   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:13.562882   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:13.562944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:13.598924   79191 cri.go:89] found id: ""
	I0816 00:38:13.598963   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.598974   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:13.598985   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:13.598999   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:13.651122   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:13.651156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:13.665255   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:13.665281   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:13.742117   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:13.742135   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:13.742148   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:13.824685   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:13.824719   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.366542   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:16.380855   79191 kubeadm.go:597] duration metric: took 4m3.665876253s to restartPrimaryControlPlane
	W0816 00:38:16.380919   79191 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:38:16.380946   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:38:13.496702   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.996304   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.421355   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:15.437651   78747 api_server.go:72] duration metric: took 4m15.224557183s to wait for apiserver process to appear ...
	I0816 00:38:15.437677   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:38:15.437721   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:15.437782   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:15.473240   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:15.473265   78747 cri.go:89] found id: ""
	I0816 00:38:15.473273   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:15.473335   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.477666   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:15.477734   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:15.526073   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:15.526095   78747 cri.go:89] found id: ""
	I0816 00:38:15.526104   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:15.526165   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.530706   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:15.530775   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:15.571124   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:15.571149   78747 cri.go:89] found id: ""
	I0816 00:38:15.571159   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:15.571217   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.578613   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:15.578690   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:15.617432   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:15.617454   78747 cri.go:89] found id: ""
	I0816 00:38:15.617464   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:15.617529   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.621818   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:15.621899   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:15.658963   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:15.658981   78747 cri.go:89] found id: ""
	I0816 00:38:15.658988   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:15.659037   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.663170   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:15.663230   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:15.699297   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.699322   78747 cri.go:89] found id: ""
	I0816 00:38:15.699331   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:15.699388   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.704029   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:15.704085   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:15.742790   78747 cri.go:89] found id: ""
	I0816 00:38:15.742816   78747 logs.go:276] 0 containers: []
	W0816 00:38:15.742825   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:15.742830   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:15.742875   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:15.776898   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:15.776918   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:15.776922   78747 cri.go:89] found id: ""
	I0816 00:38:15.776945   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:15.777007   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.781511   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.785953   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:15.785981   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.840461   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:15.840498   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:16.320285   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:16.320323   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.362171   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:16.362200   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:16.444803   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:16.444834   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:16.461705   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:16.461732   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:16.576190   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:16.576220   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:16.626407   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:16.626449   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:16.673004   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:16.673036   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:16.724770   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:16.724797   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:16.764812   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:16.764838   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:16.804268   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:16.804300   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:16.841197   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:16.841221   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.380352   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:38:19.386760   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:38:19.387751   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:19.387773   78747 api_server.go:131] duration metric: took 3.950088801s to wait for apiserver health ...
	I0816 00:38:19.387781   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:19.387801   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:19.387843   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:19.429928   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:19.429952   78747 cri.go:89] found id: ""
	I0816 00:38:19.429961   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:19.430021   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.434822   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:19.434870   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:19.476789   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:19.476811   78747 cri.go:89] found id: ""
	I0816 00:38:19.476819   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:19.476869   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.481574   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:19.481640   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:19.528718   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:19.528742   78747 cri.go:89] found id: ""
	I0816 00:38:19.528750   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:19.528799   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.533391   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:19.533455   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:19.581356   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:19.581374   78747 cri.go:89] found id: ""
	I0816 00:38:19.581381   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:19.581427   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.585915   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:19.585977   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:19.623514   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:19.623544   78747 cri.go:89] found id: ""
	I0816 00:38:19.623552   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:19.623606   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.627652   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:19.627711   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:19.663933   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:19.663957   78747 cri.go:89] found id: ""
	I0816 00:38:19.663967   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:19.664032   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.668093   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:19.668162   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:19.707688   78747 cri.go:89] found id: ""
	I0816 00:38:19.707716   78747 logs.go:276] 0 containers: []
	W0816 00:38:19.707726   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:19.707741   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:19.707804   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:19.745900   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:19.745930   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.745935   78747 cri.go:89] found id: ""
	I0816 00:38:19.745944   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:19.746002   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.750934   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.755022   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:19.755044   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:19.807228   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:19.807257   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:19.918242   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:19.918274   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:21.772367   79191 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.39139467s)
	I0816 00:38:21.772449   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:18.495150   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:20.995073   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:19.969165   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:19.969198   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:20.008945   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:20.008975   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:20.050080   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:20.050120   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:20.450059   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:20.450107   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:20.490694   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:20.490721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:20.532856   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:20.532890   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:20.609130   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:20.609178   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:20.624248   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:20.624279   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:20.675636   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:20.675669   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:20.716694   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:20.716721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:23.289748   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:23.289773   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.289778   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.289782   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.289786   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.289789   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.289792   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.289799   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.289814   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.289827   78747 system_pods.go:74] duration metric: took 3.902040304s to wait for pod list to return data ...
	I0816 00:38:23.289836   78747 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:23.293498   78747 default_sa.go:45] found service account: "default"
	I0816 00:38:23.293528   78747 default_sa.go:55] duration metric: took 3.671585ms for default service account to be created ...
	I0816 00:38:23.293539   78747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:23.298509   78747 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:23.298534   78747 system_pods.go:89] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.298540   78747 system_pods.go:89] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.298545   78747 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.298549   78747 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.298552   78747 system_pods.go:89] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.298556   78747 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.298561   78747 system_pods.go:89] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.298567   78747 system_pods.go:89] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.298576   78747 system_pods.go:126] duration metric: took 5.030455ms to wait for k8s-apps to be running ...
	I0816 00:38:23.298585   78747 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:23.298632   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:23.318383   78747 system_svc.go:56] duration metric: took 19.787836ms WaitForService to wait for kubelet
	I0816 00:38:23.318419   78747 kubeadm.go:582] duration metric: took 4m23.105331758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:23.318446   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:23.322398   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:23.322425   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:23.322436   78747 node_conditions.go:105] duration metric: took 3.985107ms to run NodePressure ...
	I0816 00:38:23.322447   78747 start.go:241] waiting for startup goroutines ...
	I0816 00:38:23.322454   78747 start.go:246] waiting for cluster config update ...
	I0816 00:38:23.322464   78747 start.go:255] writing updated cluster config ...
	I0816 00:38:23.322801   78747 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:23.374057   78747 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:23.376186   78747 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-616827" cluster and "default" namespace by default
	I0816 00:38:21.788969   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:38:21.800050   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:38:21.811193   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:38:21.811216   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:38:21.811260   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:38:21.821328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:38:21.821391   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:38:21.831777   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:38:21.841357   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:38:21.841424   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:38:21.851564   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.861262   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:38:21.861322   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.871929   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:38:21.881544   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:38:21.881595   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:38:21.891725   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:38:22.120640   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:38:22.997351   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:25.494851   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:27.494976   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:29.495248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:31.994586   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:33.995565   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:36.494547   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:38.495194   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:40.995653   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:42.996593   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:45.495409   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:47.496072   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:49.997645   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:52.496097   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:54.994390   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:56.995869   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:58.996230   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:01.495217   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:02.989403   78489 pod_ready.go:82] duration metric: took 4m0.001106911s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	E0816 00:39:02.989435   78489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 00:39:02.989456   78489 pod_ready.go:39] duration metric: took 4m14.547419665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:02.989488   78489 kubeadm.go:597] duration metric: took 4m21.799297957s to restartPrimaryControlPlane
	W0816 00:39:02.989550   78489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:39:02.989582   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:39:29.166109   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.176504479s)
	I0816 00:39:29.166193   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:29.188082   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:39:29.207577   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:39:29.230485   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:39:29.230510   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:39:29.230564   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:39:29.242106   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:39:29.242177   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:39:29.258756   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:39:29.272824   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:39:29.272896   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:39:29.285574   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.294909   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:39:29.294985   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.304843   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:39:29.315125   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:39:29.315173   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:39:29.325422   78489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:39:29.375775   78489 kubeadm.go:310] W0816 00:39:29.358885    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.376658   78489 kubeadm.go:310] W0816 00:39:29.359753    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.504337   78489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:39:38.219769   78489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 00:39:38.219865   78489 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:39:38.219968   78489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:39:38.220094   78489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:39:38.220215   78489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 00:39:38.220302   78489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:39:38.221971   78489 out.go:235]   - Generating certificates and keys ...
	I0816 00:39:38.222037   78489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:39:38.222119   78489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:39:38.222234   78489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:39:38.222316   78489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:39:38.222430   78489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:39:38.222509   78489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:39:38.222584   78489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:39:38.222684   78489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:39:38.222767   78489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:39:38.222831   78489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:39:38.222862   78489 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:39:38.222943   78489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:39:38.223035   78489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:39:38.223121   78489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 00:39:38.223212   78489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:39:38.223299   78489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:39:38.223355   78489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:39:38.223452   78489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:39:38.223534   78489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:39:38.225012   78489 out.go:235]   - Booting up control plane ...
	I0816 00:39:38.225086   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:39:38.225153   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:39:38.225211   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:39:38.225296   78489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:39:38.225366   78489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:39:38.225399   78489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:39:38.225542   78489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 00:39:38.225706   78489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 00:39:38.225803   78489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001324649s
	I0816 00:39:38.225917   78489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 00:39:38.226004   78489 kubeadm.go:310] [api-check] The API server is healthy after 5.001672205s
	I0816 00:39:38.226125   78489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 00:39:38.226267   78489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 00:39:38.226352   78489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 00:39:38.226537   78489 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-819398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 00:39:38.226620   78489 kubeadm.go:310] [bootstrap-token] Using token: 4qqrpj.xeaneqftblh8gcp3
	I0816 00:39:38.227962   78489 out.go:235]   - Configuring RBAC rules ...
	I0816 00:39:38.228060   78489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 00:39:38.228140   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 00:39:38.228290   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 00:39:38.228437   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 00:39:38.228558   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 00:39:38.228697   78489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 00:39:38.228877   78489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 00:39:38.228942   78489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 00:39:38.229000   78489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 00:39:38.229010   78489 kubeadm.go:310] 
	I0816 00:39:38.229086   78489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 00:39:38.229096   78489 kubeadm.go:310] 
	I0816 00:39:38.229160   78489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 00:39:38.229166   78489 kubeadm.go:310] 
	I0816 00:39:38.229186   78489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 00:39:38.229252   78489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 00:39:38.229306   78489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 00:39:38.229312   78489 kubeadm.go:310] 
	I0816 00:39:38.229361   78489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 00:39:38.229367   78489 kubeadm.go:310] 
	I0816 00:39:38.229403   78489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 00:39:38.229408   78489 kubeadm.go:310] 
	I0816 00:39:38.229447   78489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 00:39:38.229504   78489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 00:39:38.229562   78489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 00:39:38.229567   78489 kubeadm.go:310] 
	I0816 00:39:38.229636   78489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 00:39:38.229701   78489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 00:39:38.229707   78489 kubeadm.go:310] 
	I0816 00:39:38.229793   78489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.229925   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0816 00:39:38.229954   78489 kubeadm.go:310] 	--control-plane 
	I0816 00:39:38.229960   78489 kubeadm.go:310] 
	I0816 00:39:38.230029   78489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 00:39:38.230038   78489 kubeadm.go:310] 
	I0816 00:39:38.230109   78489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.230211   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
	I0816 00:39:38.230223   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:39:38.230232   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:39:38.231742   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:39:38.233079   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:39:38.245435   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:39:38.269502   78489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:39:38.269566   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.269593   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-819398 minikube.k8s.io/updated_at=2024_08_16T00_39_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=no-preload-819398 minikube.k8s.io/primary=true
	I0816 00:39:38.304272   78489 ops.go:34] apiserver oom_adj: -16
	I0816 00:39:38.485643   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.986569   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.486177   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.985737   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.486311   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.985981   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.486071   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.986414   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.486292   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.603092   78489 kubeadm.go:1113] duration metric: took 4.333590575s to wait for elevateKubeSystemPrivileges
	I0816 00:39:42.603133   78489 kubeadm.go:394] duration metric: took 5m1.4690157s to StartCluster
	I0816 00:39:42.603158   78489 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.603258   78489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:39:42.604833   78489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.605072   78489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:39:42.605133   78489 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:39:42.605219   78489 addons.go:69] Setting storage-provisioner=true in profile "no-preload-819398"
	I0816 00:39:42.605254   78489 addons.go:234] Setting addon storage-provisioner=true in "no-preload-819398"
	I0816 00:39:42.605251   78489 addons.go:69] Setting default-storageclass=true in profile "no-preload-819398"
	I0816 00:39:42.605259   78489 addons.go:69] Setting metrics-server=true in profile "no-preload-819398"
	I0816 00:39:42.605295   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:39:42.605308   78489 addons.go:234] Setting addon metrics-server=true in "no-preload-819398"
	I0816 00:39:42.605309   78489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-819398"
	W0816 00:39:42.605320   78489 addons.go:243] addon metrics-server should already be in state true
	W0816 00:39:42.605266   78489 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:39:42.605355   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605370   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605697   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605717   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605731   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605735   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605777   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605837   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.606458   78489 out.go:177] * Verifying Kubernetes components...
	I0816 00:39:42.607740   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:39:42.622512   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0816 00:39:42.623130   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.623697   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.623720   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.624070   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.624666   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.624695   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.626221   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0816 00:39:42.626220   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0816 00:39:42.626608   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.626695   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.627158   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627179   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627329   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627346   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627490   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.627696   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.628049   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.628165   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.628189   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.632500   78489 addons.go:234] Setting addon default-storageclass=true in "no-preload-819398"
	W0816 00:39:42.632523   78489 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:39:42.632554   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.632897   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.632928   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.644779   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0816 00:39:42.645422   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.645995   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.646026   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.646395   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.646607   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.646960   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0816 00:39:42.647374   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.648126   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.648141   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.648471   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.649494   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.649732   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.651509   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.651600   78489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:39:42.652823   78489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:39:42.652936   78489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:42.652951   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:39:42.652970   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654197   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:39:42.654217   78489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:39:42.654234   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654380   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I0816 00:39:42.654812   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.655316   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.655332   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.655784   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.656330   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.656356   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.659148   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659319   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659629   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659648   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659776   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659794   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659959   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660138   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660164   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660330   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660444   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660478   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660587   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.660583   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.674431   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
	I0816 00:39:42.674827   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.675399   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.675420   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.675756   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.675993   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.677956   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.678195   78489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:42.678211   78489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:39:42.678230   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.681163   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681593   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.681615   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681916   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.682099   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.682197   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.682276   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.822056   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:39:42.840356   78489 node_ready.go:35] waiting up to 6m0s for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852864   78489 node_ready.go:49] node "no-preload-819398" has status "Ready":"True"
	I0816 00:39:42.852887   78489 node_ready.go:38] duration metric: took 12.497677ms for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852899   78489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:42.866637   78489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:42.908814   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:39:42.908832   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:39:42.949047   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:39:42.949070   78489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:39:42.959159   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:43.021536   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.021557   78489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:39:43.068214   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:43.082144   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.243834   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.243857   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244177   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244192   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.244201   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.244212   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244451   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244505   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.250358   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.250376   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.250608   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:43.250648   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.250656   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419115   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.350866587s)
	I0816 00:39:44.419166   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419175   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419519   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419545   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419542   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419561   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419573   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419824   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419836   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419851   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.436623   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.354435707s)
	I0816 00:39:44.436682   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.436697   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437131   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437150   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437160   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.437169   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437207   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.437495   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437517   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437528   78489 addons.go:475] Verifying addon metrics-server=true in "no-preload-819398"
	I0816 00:39:44.439622   78489 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 00:39:44.441097   78489 addons.go:510] duration metric: took 1.835961958s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
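The addon enablement recorded above can be spot-checked by hand once the profile is up. A minimal sketch, assuming the profile name no-preload-819398 and the kube-system objects listed later in this log; the storage class name "standard" is the usual minikube default and is an assumption, not something printed in this run:

	kubectl --context no-preload-819398 get storageclass standard
	kubectl --context no-preload-819398 -n kube-system get deploy metrics-server
	kubectl --context no-preload-819398 -n kube-system get pod storage-provisioner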
	I0816 00:39:44.878479   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:47.373009   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:49.380832   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:50.372883   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.372919   78489 pod_ready.go:82] duration metric: took 7.506242182s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.372933   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378463   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.378486   78489 pod_ready.go:82] duration metric: took 5.546402ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378496   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383347   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.383364   78489 pod_ready.go:82] duration metric: took 4.862995ms for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383374   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387672   78489 pod_ready.go:93] pod "kube-proxy-nl7g6" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.387693   78489 pod_ready.go:82] duration metric: took 4.312811ms for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387703   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391921   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.391939   78489 pod_ready.go:82] duration metric: took 4.229092ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391945   78489 pod_ready.go:39] duration metric: took 7.539034647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:50.391958   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:39:50.392005   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:39:50.407980   78489 api_server.go:72] duration metric: took 7.802877941s to wait for apiserver process to appear ...
	I0816 00:39:50.408017   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:39:50.408039   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:39:50.412234   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:39:50.413278   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:39:50.413297   78489 api_server.go:131] duration metric: took 5.273051ms to wait for apiserver health ...
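The healthz probe that returned 200 above can be reproduced from outside minikube. A hedged sketch: the address is the apiserver endpoint from this log, anonymous access to /healthz relies on the default RBAC rules, and the -k flag is simply one way of skipping TLS verification rather than what minikube itself does:

	curl -k https://192.168.61.15:8443/healthz
	# a healthy apiserver answers with the single word: ok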
	I0816 00:39:50.413304   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:39:50.573185   78489 system_pods.go:59] 9 kube-system pods found
	I0816 00:39:50.573226   78489 system_pods.go:61] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.573233   78489 system_pods.go:61] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.573239   78489 system_pods.go:61] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.573244   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.573250   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.573257   78489 system_pods.go:61] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.573262   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.573271   78489 system_pods.go:61] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.573278   78489 system_pods.go:61] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.573288   78489 system_pods.go:74] duration metric: took 159.97729ms to wait for pod list to return data ...
	I0816 00:39:50.573301   78489 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:39:50.771164   78489 default_sa.go:45] found service account: "default"
	I0816 00:39:50.771189   78489 default_sa.go:55] duration metric: took 197.881739ms for default service account to be created ...
	I0816 00:39:50.771198   78489 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:39:50.973415   78489 system_pods.go:86] 9 kube-system pods found
	I0816 00:39:50.973448   78489 system_pods.go:89] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.973453   78489 system_pods.go:89] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.973457   78489 system_pods.go:89] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.973461   78489 system_pods.go:89] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.973465   78489 system_pods.go:89] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.973468   78489 system_pods.go:89] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.973471   78489 system_pods.go:89] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.973477   78489 system_pods.go:89] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.973482   78489 system_pods.go:89] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.973491   78489 system_pods.go:126] duration metric: took 202.288008ms to wait for k8s-apps to be running ...
	I0816 00:39:50.973498   78489 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:39:50.973539   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:50.989562   78489 system_svc.go:56] duration metric: took 16.053781ms WaitForService to wait for kubelet
	I0816 00:39:50.989595   78489 kubeadm.go:582] duration metric: took 8.384495377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:39:50.989618   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:39:51.171076   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:39:51.171109   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:39:51.171120   78489 node_conditions.go:105] duration metric: took 181.496732ms to run NodePressure ...
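The NodePressure check above reads the node's reported capacity (2 CPUs, 17734596Ki ephemeral storage in this run). A rough equivalent with kubectl, assuming the same context and node name:

	kubectl --context no-preload-819398 get node no-preload-819398 \
	  -o jsonpath='{.status.capacity.cpu}{"\n"}{.status.capacity.ephemeral-storage}{"\n"}'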
	I0816 00:39:51.171134   78489 start.go:241] waiting for startup goroutines ...
	I0816 00:39:51.171144   78489 start.go:246] waiting for cluster config update ...
	I0816 00:39:51.171157   78489 start.go:255] writing updated cluster config ...
	I0816 00:39:51.171465   78489 ssh_runner.go:195] Run: rm -f paused
	I0816 00:39:51.220535   78489 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:39:51.223233   78489 out.go:177] * Done! kubectl is now configured to use "no-preload-819398" cluster and "default" namespace by default
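The "minor skew: 0" note above comes from comparing the local kubectl version against the cluster version. The same comparison can be made directly; a minimal sketch against the context this run configured:

	kubectl --context no-preload-819398 version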
	I0816 00:40:18.143220   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:40:18.143333   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:40:18.144757   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.144804   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.144888   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.145018   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.145134   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:18.145210   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:18.146791   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:18.146879   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:18.146965   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:18.147072   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:18.147164   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:18.147258   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:18.147340   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:18.147434   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:18.147525   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:18.147613   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:18.147708   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:18.147744   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:18.147791   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:18.147839   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:18.147916   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:18.147989   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:18.148045   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:18.148194   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:18.148318   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:18.148365   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:18.148458   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:18.149817   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:18.149941   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:18.150044   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:18.150107   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:18.150187   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:18.150323   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:40:18.150380   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:40:18.150460   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150671   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.150766   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150953   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151033   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151232   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151305   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151520   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151614   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151840   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151856   79191 kubeadm.go:310] 
	I0816 00:40:18.151917   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:40:18.151978   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:40:18.151992   79191 kubeadm.go:310] 
	I0816 00:40:18.152046   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:40:18.152097   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:40:18.152204   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:40:18.152218   79191 kubeadm.go:310] 
	I0816 00:40:18.152314   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:40:18.152349   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:40:18.152377   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:40:18.152384   79191 kubeadm.go:310] 
	I0816 00:40:18.152466   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:40:18.152537   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:40:18.152543   79191 kubeadm.go:310] 
	I0816 00:40:18.152674   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:40:18.152769   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:40:18.152853   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:40:18.152914   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:40:18.152978   79191 kubeadm.go:310] 
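The troubleshooting steps kubeadm prints above can be run as a single pass on the node (for example over minikube ssh). A minimal sketch using only the commands quoted in the log; CONTAINERID is a placeholder for whatever ID the crictl listing returns:

	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# once a failing container is identified:
	# crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID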
	W0816 00:40:18.153019   79191 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 00:40:18.153055   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:40:18.634058   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:40:18.648776   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:40:18.659504   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:40:18.659529   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:40:18.659584   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:40:18.670234   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:40:18.670285   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:40:18.680370   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:40:18.689496   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:40:18.689557   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:40:18.698949   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.708056   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:40:18.708118   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.718261   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:40:18.728708   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:40:18.728777   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
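The four grep/rm pairs above implement a check-then-remove pass over stale kubeconfigs: a file is kept only if it already references the expected control-plane endpoint. A condensed sketch of that behaviour; minikube issues each grep and rm as a separate ssh command rather than a loop:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done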
	I0816 00:40:18.739253   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:40:18.819666   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.819746   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.966568   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.966704   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.966868   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:19.168323   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:19.170213   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:19.170335   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:19.170464   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:19.170546   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:19.170598   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:19.170670   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:19.170740   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:19.170828   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:19.170924   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:19.171031   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:19.171129   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:19.171179   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:19.171261   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:19.421256   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:19.585260   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:19.672935   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:19.928620   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:19.952420   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:19.953527   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:19.953578   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:20.090384   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:20.092904   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:20.093037   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:20.105743   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:20.106980   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:20.108199   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:20.111014   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:41:00.113053   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:41:00.113479   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:00.113752   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:05.113795   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:05.114091   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:15.114695   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:15.114932   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:35.116019   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:35.116207   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.116728   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:42:15.116994   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.117018   79191 kubeadm.go:310] 
	I0816 00:42:15.117071   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:42:15.117136   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:42:15.117147   79191 kubeadm.go:310] 
	I0816 00:42:15.117198   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:42:15.117248   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:42:15.117402   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:42:15.117412   79191 kubeadm.go:310] 
	I0816 00:42:15.117543   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:42:15.117601   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:42:15.117636   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:42:15.117644   79191 kubeadm.go:310] 
	I0816 00:42:15.117778   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:42:15.117918   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:42:15.117929   79191 kubeadm.go:310] 
	I0816 00:42:15.118083   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:42:15.118215   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:42:15.118313   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:42:15.118412   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:42:15.118433   79191 kubeadm.go:310] 
	I0816 00:42:15.118582   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:42:15.118698   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:42:15.118843   79191 kubeadm.go:394] duration metric: took 8m2.460648867s to StartCluster
	I0816 00:42:15.118855   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:42:15.118891   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:42:15.118957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:42:15.162809   79191 cri.go:89] found id: ""
	I0816 00:42:15.162837   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.162848   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:42:15.162855   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:42:15.162925   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:42:15.198020   79191 cri.go:89] found id: ""
	I0816 00:42:15.198042   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.198053   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:42:15.198063   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:42:15.198132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:42:15.238168   79191 cri.go:89] found id: ""
	I0816 00:42:15.238197   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.238206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:42:15.238213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:42:15.238273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:42:15.278364   79191 cri.go:89] found id: ""
	I0816 00:42:15.278391   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.278401   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:42:15.278407   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:42:15.278465   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:42:15.316182   79191 cri.go:89] found id: ""
	I0816 00:42:15.316209   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.316216   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:42:15.316222   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:42:15.316278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:42:15.352934   79191 cri.go:89] found id: ""
	I0816 00:42:15.352962   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.352970   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:42:15.352976   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:42:15.353031   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:42:15.388940   79191 cri.go:89] found id: ""
	I0816 00:42:15.388966   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.388973   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:42:15.388983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:42:15.389042   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:42:15.424006   79191 cri.go:89] found id: ""
	I0816 00:42:15.424035   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.424043   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:42:15.424054   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:42:15.424073   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:42:15.504823   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:42:15.504846   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:42:15.504858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:42:15.608927   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:42:15.608959   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:42:15.676785   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:42:15.676810   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:42:15.744763   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:42:15.744805   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
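The log-gathering pass above (CRI-O, container status, kubelet, dmesg) can be reproduced manually on the node when triaging a failed start. A sketch using the same commands the log shows; the output file names are placeholders:

	sudo journalctl -u crio -n 400 > crio.log
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo crictl ps -a > containers.txt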
	W0816 00:42:15.760944   79191 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 00:42:15.761012   79191 out.go:270] * 
	W0816 00:42:15.761078   79191 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.761098   79191 out.go:270] * 
	W0816 00:42:15.762220   79191 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:42:15.765697   79191 out.go:201] 
	W0816 00:42:15.766942   79191 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.767018   79191 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 00:42:15.767040   79191 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 00:42:15.768526   79191 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.111154300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769481111124865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93f3f41a-97b0-45c8-bb24-43d074e7a9aa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.112010545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d55b5f5b-5b4f-4f55-b280-48ca5e8f1fa5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.112067466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d55b5f5b-5b4f-4f55-b280-48ca5e8f1fa5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.112099238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d55b5f5b-5b4f-4f55-b280-48ca5e8f1fa5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.149656358Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd2d65ef-a5da-47b4-98b6-3cc9571a58d0 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.149732543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd2d65ef-a5da-47b4-98b6-3cc9571a58d0 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.152360867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=412c5a8e-74dd-4882-9a04-f02cf2dbeccf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.152884228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769481152856834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=412c5a8e-74dd-4882-9a04-f02cf2dbeccf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.153527250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17afa7b3-e55f-40c0-95b2-cc16f299fba9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.153580413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17afa7b3-e55f-40c0-95b2-cc16f299fba9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.153627193Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=17afa7b3-e55f-40c0-95b2-cc16f299fba9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.187629207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0db2d147-bc2e-4c2d-b3cf-e436d311a5b0 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.187707061Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0db2d147-bc2e-4c2d-b3cf-e436d311a5b0 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.189196819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fbdaa12-7957-4c39-8a4a-0d6d68da1432 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.189657534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769481189634276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fbdaa12-7957-4c39-8a4a-0d6d68da1432 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.190192664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c858e1ac-bf6e-4334-877b-845c20a9ba9e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.190249175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c858e1ac-bf6e-4334-877b-845c20a9ba9e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.190279919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c858e1ac-bf6e-4334-877b-845c20a9ba9e name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.222894184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5eb7d3a2-32bf-4051-8bfe-8009ab56fb2e name=/runtime.v1.RuntimeService/Version
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.222975620Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5eb7d3a2-32bf-4051-8bfe-8009ab56fb2e name=/runtime.v1.RuntimeService/Version
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.224402880Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cdbbbcb-4959-47ba-8242-b97aa39e4d0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.224891279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769481224865365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cdbbbcb-4959-47ba-8242-b97aa39e4d0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.225538391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fca05b7a-14c9-4f56-a38d-aeb5a6228d7d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.225592443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fca05b7a-14c9-4f56-a38d-aeb5a6228d7d name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:51:21 old-k8s-version-098619 crio[650]: time="2024-08-16 00:51:21.225628320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fca05b7a-14c9-4f56-a38d-aeb5a6228d7d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug16 00:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055820] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042316] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.997792] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.610931] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.386268] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 00:34] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.149906] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.218773] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.113453] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.292715] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.582198] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.063869] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.975940] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	[ +13.278959] kauditd_printk_skb: 46 callbacks suppressed
	[Aug16 00:38] systemd-fstab-generator[5083]: Ignoring "noauto" option for root device
	[Aug16 00:40] systemd-fstab-generator[5363]: Ignoring "noauto" option for root device
	[  +0.062259] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:51:21 up 17 min,  0 users,  load average: 0.00, 0.02, 0.03
	Linux old-k8s-version-098619 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0009f86f0)
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c37ef0, 0x4f0ac20, 0xc0004e5e00, 0x1, 0xc0001000c0)
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000894c40, 0xc0001000c0)
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009e2540, 0xc000a10b40)
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6525]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 16 00:51:16 old-k8s-version-098619 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 16 00:51:16 old-k8s-version-098619 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 16 00:51:16 old-k8s-version-098619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 16 00:51:16 old-k8s-version-098619 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 16 00:51:16 old-k8s-version-098619 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6534]: I0816 00:51:16.952569    6534 server.go:416] Version: v1.20.0
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6534]: I0816 00:51:16.952909    6534 server.go:837] Client rotation is on, will bootstrap in background
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6534]: I0816 00:51:16.955958    6534 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6534]: I0816 00:51:16.957309    6534 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 00:51:16 old-k8s-version-098619 kubelet[6534]: W0816 00:51:16.957377    6534 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 2 (261.380667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-098619" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.54s)
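Note: the kubelet excerpt above shows the service crash-looping (restart counter at 114) with the warning "Cannot detect current cgroup on cgroup v2", and the captured minikube output's own Suggestion line proposes retrying with the systemd cgroup driver. A minimal sketch of that retry, reusing the profile name, driver and Kubernetes version from the logs (illustrative only, not part of the test harness):

    # retry the failing profile with the cgroup driver suggested in the log above
    out/minikube-linux-amd64 start -p old-k8s-version-098619 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
    # if the start still fails, inspect the kubelet from inside the VM, as the log suggests
    out/minikube-linux-amd64 ssh -p old-k8s-version-098619 -- sudo journalctl -xeu kubelet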

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (427.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-758469 -n embed-certs-758469
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-16 00:54:17.400043942 +0000 UTC m=+6531.980622916
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-758469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-758469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.947µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-758469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
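Note: with the harness's describe call timing out, the dashboard addon images can also be checked by hand; a minimal sketch assuming the same kubectl context and namespace shown above (a manual spot check, not part of the test):

    # list each kubernetes-dashboard deployment with the image(s) it runs
    kubectl --context embed-certs-758469 -n kubernetes-dashboard get deploy \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[*].image}{"\n"}{end}'
    # the assertion above expects one of these images to be registry.k8s.io/echoserver:1.4, i.e. the value
    # passed via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 when the dashboard addon was enabled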
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-758469 -n embed-certs-758469
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-758469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-758469 logs -n 25: (1.149170372s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-067133 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | disable-driver-mounts-067133                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:25 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-819398             | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC | 16 Aug 24 00:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-758469            | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-616827  | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098619        | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-819398                  | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-758469                 | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-616827       | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098619             | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:52 UTC | 16 Aug 24 00:53 UTC |
	| start   | -p newest-cni-504758 --memory=2200 --alsologtostderr   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-504758             | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-504758                                   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-504758                  | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-504758 --memory=2200 --alsologtostderr   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:54 UTC | 16 Aug 24 00:54 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:53:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 00:53:59.873204   85810 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:53:59.873418   85810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:53:59.873432   85810 out.go:358] Setting ErrFile to fd 2...
	I0816 00:53:59.873480   85810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:53:59.874000   85810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:53:59.875082   85810 out.go:352] Setting JSON to false
	I0816 00:53:59.876025   85810 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9340,"bootTime":1723760300,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:53:59.876086   85810 start.go:139] virtualization: kvm guest
	I0816 00:53:59.877808   85810 out.go:177] * [newest-cni-504758] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:53:59.879412   85810 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:53:59.879465   85810 notify.go:220] Checking for updates...
	I0816 00:53:59.881689   85810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:53:59.882889   85810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:53:59.883990   85810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:53:59.885260   85810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:53:59.886614   85810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:53:59.888418   85810 config.go:182] Loaded profile config "newest-cni-504758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:53:59.889045   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:53:59.889129   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:53:59.903823   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42987
	I0816 00:53:59.904331   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:53:59.904868   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:53:59.904885   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:53:59.905186   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:53:59.905427   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:53:59.905682   85810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:53:59.906014   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:53:59.906063   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:53:59.920597   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0816 00:53:59.921028   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:53:59.921558   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:53:59.921585   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:53:59.921878   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:53:59.922101   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:53:59.960476   85810 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:53:59.961667   85810 start.go:297] selected driver: kvm2
	I0816 00:53:59.961693   85810 start.go:901] validating driver "kvm2" against &{Name:newest-cni-504758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-504758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:53:59.961862   85810 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:53:59.962612   85810 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:53:59.962691   85810 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:53:59.979820   85810 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
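(Editor's note) The install.go lines above validate that the docker-machine-driver-kvm2 binary found on PATH reports a usable version before the profile is started. A minimal sketch of such a check, assuming for illustration that the driver binary prints its version when invoked with a "version" argument; the argument name, output format, and helper are assumptions, not taken from minikube's source:

// versioncheck.go - illustrative sketch only; minikube's install.go may
// invoke the driver differently and parse its output differently.
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

// driverVersion runs the driver binary and extracts a semver-looking token
// from whatever it prints. Both the "version" argument and the regexp are
// illustrative assumptions.
func driverVersion(bin string) (string, error) {
	out, err := exec.Command(bin, "version").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("running %s: %w", bin, err)
	}
	v := regexp.MustCompile(`\d+\.\d+\.\d+`).FindString(string(out))
	if v == "" {
		return "", fmt.Errorf("no version found in output: %q", out)
	}
	return v, nil
}

func main() {
	v, err := driverVersion("/usr/local/bin/docker-machine-driver-kvm2")
	if err != nil {
		fmt.Println("validation failed:", err)
		return
	}
	fmt.Println("driver version:", v) // e.g. "1.33.1" as reported in the log
}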
	I0816 00:53:59.980325   85810 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 00:53:59.980411   85810 cni.go:84] Creating CNI manager for ""
	I0816 00:53:59.980428   85810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:53:59.980498   85810 start.go:340] cluster config:
	{Name:newest-cni-504758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-504758 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:53:59.980658   85810 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:53:59.982679   85810 out.go:177] * Starting "newest-cni-504758" primary control-plane node in "newest-cni-504758" cluster
	I0816 00:53:59.983985   85810 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:53:59.984028   85810 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:53:59.984040   85810 cache.go:56] Caching tarball of preloaded images
	I0816 00:53:59.984135   85810 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:53:59.984149   85810 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
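(Editor's note) The preload step above skips the download only because the cached tarball is already on disk. A minimal sketch of that existence check, assuming a cache layout that mirrors the path printed in the log; the helper name and exact filename pattern are illustrative, and the real preload.go additionally verifies checksums and handles downloads:

// preloadcheck.go - illustrative only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache location for a given Kubernetes version and
// container runtime, following the path shown in the log above.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("HOME")+"/.minikube", "v1.31.0", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("preload missing, would download:", p)
	}
}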
	I0816 00:53:59.984291   85810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/config.json ...
	I0816 00:53:59.984554   85810 start.go:360] acquireMachinesLock for newest-cni-504758: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:53:59.984616   85810 start.go:364] duration metric: took 33.654µs to acquireMachinesLock for "newest-cni-504758"
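(Editor's note) acquireMachinesLock serializes access to the machine configuration, and the log records how long the caller waited (33.654µs here, against a 13m timeout). A toy in-process sketch of a lock with a timeout using only the standard library; the real lock is a named, cross-process lock, which this channel-based version does not attempt to model:

package main

import (
	"errors"
	"fmt"
	"time"
)

// machineLock is an illustrative lock with acquire-with-timeout semantics.
type machineLock struct{ ch chan struct{} }

func newMachineLock() *machineLock { return &machineLock{ch: make(chan struct{}, 1)} }

// Acquire blocks until the lock is free or the timeout expires, returning
// how long it waited - mirroring the duration metric in the log.
func (l *machineLock) Acquire(timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	select {
	case l.ch <- struct{}{}:
		return time.Since(start), nil
	case <-time.After(timeout):
		return time.Since(start), errors.New("timed out waiting for machines lock")
	}
}

func (l *machineLock) Release() { <-l.ch }

func main() {
	l := newMachineLock()
	waited, err := l.Acquire(13 * time.Minute) // timeout value as shown in the log
	if err != nil {
		fmt.Println(err)
		return
	}
	defer l.Release()
	fmt.Printf("duration metric: took %s to acquire machines lock\n", waited)
}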
	I0816 00:53:59.984636   85810 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:53:59.984645   85810 fix.go:54] fixHost starting: 
	I0816 00:53:59.985031   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:53:59.985076   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:53:59.999635   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35639
	I0816 00:54:00.000153   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:00.000802   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:00.000828   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:00.001196   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:00.001416   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:00.001618   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetState
	I0816 00:54:00.003393   85810 fix.go:112] recreateIfNeeded on newest-cni-504758: state=Stopped err=<nil>
	I0816 00:54:00.003432   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	W0816 00:54:00.003592   85810 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:54:00.005439   85810 out.go:177] * Restarting existing kvm2 VM for "newest-cni-504758" ...
	I0816 00:54:00.006544   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Start
	I0816 00:54:00.006728   85810 main.go:141] libmachine: (newest-cni-504758) Ensuring networks are active...
	I0816 00:54:00.007518   85810 main.go:141] libmachine: (newest-cni-504758) Ensuring network default is active
	I0816 00:54:00.007916   85810 main.go:141] libmachine: (newest-cni-504758) Ensuring network mk-newest-cni-504758 is active
	I0816 00:54:00.008361   85810 main.go:141] libmachine: (newest-cni-504758) Getting domain xml...
	I0816 00:54:00.009155   85810 main.go:141] libmachine: (newest-cni-504758) Creating domain...
	I0816 00:54:01.244893   85810 main.go:141] libmachine: (newest-cni-504758) Waiting to get IP...
	I0816 00:54:01.245724   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:01.246226   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:01.246290   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:01.246206   85845 retry.go:31] will retry after 269.680632ms: waiting for machine to come up
	I0816 00:54:01.517881   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:01.518403   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:01.518423   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:01.518361   85845 retry.go:31] will retry after 274.232355ms: waiting for machine to come up
	I0816 00:54:01.793786   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:01.794319   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:01.794348   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:01.794273   85845 retry.go:31] will retry after 416.170581ms: waiting for machine to come up
	I0816 00:54:02.212494   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:02.212959   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:02.213002   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:02.212904   85845 retry.go:31] will retry after 465.478219ms: waiting for machine to come up
	I0816 00:54:02.679458   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:02.679920   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:02.679955   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:02.679889   85845 retry.go:31] will retry after 748.437183ms: waiting for machine to come up
	I0816 00:54:03.429734   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:03.430251   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:03.430274   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:03.430199   85845 retry.go:31] will retry after 895.520052ms: waiting for machine to come up
	I0816 00:54:04.326808   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:04.327193   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:04.327219   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:04.327161   85845 retry.go:31] will retry after 754.604111ms: waiting for machine to come up
	I0816 00:54:05.083593   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:05.084040   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:05.084077   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:05.083992   85845 retry.go:31] will retry after 966.654738ms: waiting for machine to come up
	I0816 00:54:06.052390   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:06.052967   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:06.052994   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:06.052911   85845 retry.go:31] will retry after 1.600341812s: waiting for machine to come up
	I0816 00:54:07.655341   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:07.656004   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:07.656029   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:07.655958   85845 retry.go:31] will retry after 2.147103051s: waiting for machine to come up
	I0816 00:54:09.805659   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:09.806134   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:09.806167   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:09.806085   85845 retry.go:31] will retry after 2.057779929s: waiting for machine to come up
	I0816 00:54:11.866329   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:11.866766   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:11.866811   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:11.866746   85845 retry.go:31] will retry after 2.944636547s: waiting for machine to come up
	I0816 00:54:14.813343   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:14.813896   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:14.813924   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:14.813818   85845 retry.go:31] will retry after 4.129457164s: waiting for machine to come up
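(Editor's note) The retry.go lines above form a wait-for-IP loop: each attempt asks libvirt for the domain's address, and on failure the delay grows roughly geometrically with some jitter before the next attempt. A minimal sketch of that pattern, with the lookup stubbed out (the real code queries libvirt DHCP leases) and the growth factor and jitter chosen here for illustration only:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookup with an increasing, jittered delay until it
// succeeds or maxAttempts is exhausted - the shape of the log lines above.
func waitForIP(lookup func() (string, error), maxAttempts int) (string, error) {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay between attempts (illustrative)
	}
	return "", errors.New("machine did not report an IP address in time")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.72.148", nil // example value; the node IP from the profile above
	}, 10)
	fmt.Println(ip, err)
}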
	
	
	==> CRI-O <==
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.035122428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69955081-b9ff-4545-9f23-7c976fb485a7 name=/runtime.v1.RuntimeService/Version
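(Editor's note) The CRI-O entries from here on are the runtime's debug log for CRI requests arriving over its gRPC socket (Version, ImageFsInfo, ListContainers, ListPodSandbox). A minimal Go client that issues the same Version call is sketched below; it assumes the default CRI-O socket path and uses the published k8s.io/cri-api v1 types, but treat the details as an assumption rather than documentation of how minikube or kubelet connect:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket path; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The same RPC the log records as /runtime.v1.RuntimeService/Version.
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}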
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.036427188Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab22a15f-91f2-4323-a8db-1044975abb30 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.036828636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769658036807504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab22a15f-91f2-4323-a8db-1044975abb30 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.037981988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73cfc1ae-f0c8-4d21-9ad0-4eddb95e5924 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.038105638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73cfc1ae-f0c8-4d21-9ad0-4eddb95e5924 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.038962086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768447843336160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901436142b66005d7e7eeec98b2fd068f1d3c25b0fd7ac6ead4d82f112ac935a,PodSandboxId:342f73cb40d64d7bc8cda9c88be481ae9cf08f80c727484d4a17564d0d665388,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768425903172036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5,PodSandboxId:173fab85479db6a9c5c09041d2687b6a1e849983052a937ef313149cebd29482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768424751583816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-54gqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768417007007341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110,PodSandboxId:af1a2b4ddcaabb6cafc78819724fb23547ff7912af880f3bb4bd54f0e24c8874,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768416996249241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xc89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b4bb32-a0cf-4147-957d-83b3ed13a
b06,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3,PodSandboxId:f3769b8ad536eb3a2ef92088c92a36aff93f3f173a5e4f9ee7b524f5edc8969a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768413361960077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddfb14b1026513b97fb9b58c31b967d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a,PodSandboxId:5ee83674c575a20f37423399a14e074d4d2c922943932a22b5d75b2538c21ea9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768413269749088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f559d81bdb4acc95208893e11d87e1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2,PodSandboxId:ea1c3acb4de0ebf14d64e96b76d2ee29e8aaace0d900089476a8ad91633f020e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768413335009325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e260ccf04023759b027fb8adcd82425b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6,PodSandboxId:1e623530187b473822202607a845eaa268bc860e2b04d928cf6132e81631741b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768413231619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 445cf946cdc1d4e383a184c067c48f41,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73cfc1ae-f0c8-4d21-9ad0-4eddb95e5924 name=/runtime.v1.RuntimeService/ListContainers
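(Editor's note) The ListContainers response above is the unfiltered container list ("No filters were applied"). Continuing the sketch from the Version example, the equivalent call looks roughly like this fragment; it reuses the same imports and connection, field names follow the k8s.io/cri-api v1 types, and truncating the ID to 12 characters is just a display choice:

// Assumes the same imports and gRPC connection as the Version sketch above.
func listAllContainers(client runtimeapi.RuntimeServiceClient) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter corresponds to "No filters were applied, returning full
	// container list" in the CRI-O debug log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		return err
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n",
			c.Id[:12],
			c.Metadata.Name,
			c.State) // e.g. CONTAINER_RUNNING / CONTAINER_EXITED, as seen in the log
	}
	return nil
}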
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.081723460Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b13c528-7366-4969-be33-eaac90ae9891 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.081812431Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b13c528-7366-4969-be33-eaac90ae9891 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.087862257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b13ca10d-36a0-4249-a6cf-748a0828773c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.088351129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769658088322819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b13ca10d-36a0-4249-a6cf-748a0828773c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.089187232Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f702bfb4-1eab-4908-b764-8030c474abeb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.089265444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f702bfb4-1eab-4908-b764-8030c474abeb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.089483699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768447843336160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901436142b66005d7e7eeec98b2fd068f1d3c25b0fd7ac6ead4d82f112ac935a,PodSandboxId:342f73cb40d64d7bc8cda9c88be481ae9cf08f80c727484d4a17564d0d665388,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768425903172036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5,PodSandboxId:173fab85479db6a9c5c09041d2687b6a1e849983052a937ef313149cebd29482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768424751583816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-54gqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768417007007341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110,PodSandboxId:af1a2b4ddcaabb6cafc78819724fb23547ff7912af880f3bb4bd54f0e24c8874,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768416996249241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xc89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b4bb32-a0cf-4147-957d-83b3ed13a
b06,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3,PodSandboxId:f3769b8ad536eb3a2ef92088c92a36aff93f3f173a5e4f9ee7b524f5edc8969a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768413361960077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddfb14b1026513b97fb9b58c31b967d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a,PodSandboxId:5ee83674c575a20f37423399a14e074d4d2c922943932a22b5d75b2538c21ea9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768413269749088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f559d81bdb4acc95208893e11d87e1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2,PodSandboxId:ea1c3acb4de0ebf14d64e96b76d2ee29e8aaace0d900089476a8ad91633f020e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768413335009325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e260ccf04023759b027fb8adcd82425b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6,PodSandboxId:1e623530187b473822202607a845eaa268bc860e2b04d928cf6132e81631741b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768413231619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 445cf946cdc1d4e383a184c067c48f41,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f702bfb4-1eab-4908-b764-8030c474abeb name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.100247759Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=bc3e855b-75a2-4789-906d-d2a56d13bda0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.100466329Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:173fab85479db6a9c5c09041d2687b6a1e849983052a937ef313149cebd29482,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-54gqb,Uid:6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723768424450076605,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-54gqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T00:33:36.553665349Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:342f73cb40d64d7bc8cda9c88be481ae9cf08f80c727484d4a17564d0d665388,Metadata:&PodSandboxMetadata{Name:busybox,Uid:1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,Namespace:default,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1723768424449425328,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T00:33:36.553669183Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab9ea4096453e45e17f387ad6470345e984489fefd555a241394fa3b7a84c546,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-pnmsm,Uid:1fb83d03-46c2-4455-9455-e35c0a968ff1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723768422660714648,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-pnmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb83d03-46c2-4455-9455-e35c0a968ff1,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-16T00:33:36.
553668112Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:caae6cfe-efca-4626-95d1-321af01f2095,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723768416869308485,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-16T00:33:36.553664183Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af1a2b4ddcaabb6cafc78819724fb23547ff7912af880f3bb4bd54f0e24c8874,Metadata:&PodSandboxMetadata{Name:kube-proxy-4xc89,Uid:04b4bb32-a0cf-4147-957d-83b3ed13ab06,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723768416868063305,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4xc89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b4bb32-a0cf-4147-957d-83b3ed13ab06,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.i
o/config.seen: 2024-08-16T00:33:36.553661370Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3769b8ad536eb3a2ef92088c92a36aff93f3f173a5e4f9ee7b524f5edc8969a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-758469,Uid:eddfb14b1026513b97fb9b58c31b967d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723768413087410648,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddfb14b1026513b97fb9b58c31b967d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: eddfb14b1026513b97fb9b58c31b967d,kubernetes.io/config.seen: 2024-08-16T00:33:32.557727883Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5ee83674c575a20f37423399a14e074d4d2c922943932a22b5d75b2538c21ea9,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-758469,Uid:86f559d81bdb4acc95208893e11d87e1,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723768413072744999,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f559d81bdb4acc95208893e11d87e1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.185:2379,kubernetes.io/config.hash: 86f559d81bdb4acc95208893e11d87e1,kubernetes.io/config.seen: 2024-08-16T00:33:32.585262581Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e623530187b473822202607a845eaa268bc860e2b04d928cf6132e81631741b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-758469,Uid:445cf946cdc1d4e383a184c067c48f41,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723768413059067843,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-758469,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 445cf946cdc1d4e383a184c067c48f41,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: 445cf946cdc1d4e383a184c067c48f41,kubernetes.io/config.seen: 2024-08-16T00:33:32.557729103Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ea1c3acb4de0ebf14d64e96b76d2ee29e8aaace0d900089476a8ad91633f020e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-758469,Uid:e260ccf04023759b027fb8adcd82425b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723768413047674472,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e260ccf04023759b027fb8adcd82425b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e260ccf04023759b027fb8adcd
82425b,kubernetes.io/config.seen: 2024-08-16T00:33:32.557724030Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bc3e855b-75a2-4789-906d-d2a56d13bda0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.101414960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7bbc634-ffcf-45cf-96e1-09ad785e4b55 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.101466142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7bbc634-ffcf-45cf-96e1-09ad785e4b55 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.101646580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768447843336160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901436142b66005d7e7eeec98b2fd068f1d3c25b0fd7ac6ead4d82f112ac935a,PodSandboxId:342f73cb40d64d7bc8cda9c88be481ae9cf08f80c727484d4a17564d0d665388,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768425903172036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5,PodSandboxId:173fab85479db6a9c5c09041d2687b6a1e849983052a937ef313149cebd29482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768424751583816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-54gqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768417007007341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110,PodSandboxId:af1a2b4ddcaabb6cafc78819724fb23547ff7912af880f3bb4bd54f0e24c8874,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768416996249241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xc89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b4bb32-a0cf-4147-957d-83b3ed13a
b06,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3,PodSandboxId:f3769b8ad536eb3a2ef92088c92a36aff93f3f173a5e4f9ee7b524f5edc8969a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768413361960077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddfb14b1026513b97fb9b58c31b967d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a,PodSandboxId:5ee83674c575a20f37423399a14e074d4d2c922943932a22b5d75b2538c21ea9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768413269749088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f559d81bdb4acc95208893e11d87e1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2,PodSandboxId:ea1c3acb4de0ebf14d64e96b76d2ee29e8aaace0d900089476a8ad91633f020e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768413335009325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e260ccf04023759b027fb8adcd82425b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6,PodSandboxId:1e623530187b473822202607a845eaa268bc860e2b04d928cf6132e81631741b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768413231619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 445cf946cdc1d4e383a184c067c48f41,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7bbc634-ffcf-45cf-96e1-09ad785e4b55 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.128152159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d87a6872-2093-4d42-9714-3e6804d47f04 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.128225030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d87a6872-2093-4d42-9714-3e6804d47f04 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.129430513Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee781d3b-73fd-44d4-84d8-5ce4af5ced86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.130278130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769658130246520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee781d3b-73fd-44d4-84d8-5ce4af5ced86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.130807558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e62f1ac-9dce-418c-bccf-ebd480c87ef6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.130880698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e62f1ac-9dce-418c-bccf-ebd480c87ef6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:18 embed-certs-758469 crio[728]: time="2024-08-16 00:54:18.131230881Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768447843336160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:901436142b66005d7e7eeec98b2fd068f1d3c25b0fd7ac6ead4d82f112ac935a,PodSandboxId:342f73cb40d64d7bc8cda9c88be481ae9cf08f80c727484d4a17564d0d665388,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768425903172036,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1eb1c3b9-67a8-462a-a1f7-df1af9e610cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5,PodSandboxId:173fab85479db6a9c5c09041d2687b6a1e849983052a937ef313149cebd29482,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768424751583816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-54gqb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da,PodSandboxId:9fda6f0a2567dbd866634d2435e7a8cb31c6273ea287b9c59f6de912877705ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768417007007341,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
caae6cfe-efca-4626-95d1-321af01f2095,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110,PodSandboxId:af1a2b4ddcaabb6cafc78819724fb23547ff7912af880f3bb4bd54f0e24c8874,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768416996249241,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xc89,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04b4bb32-a0cf-4147-957d-83b3ed13a
b06,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3,PodSandboxId:f3769b8ad536eb3a2ef92088c92a36aff93f3f173a5e4f9ee7b524f5edc8969a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768413361960077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddfb14b1026513b97fb9b58c31b967d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a,PodSandboxId:5ee83674c575a20f37423399a14e074d4d2c922943932a22b5d75b2538c21ea9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768413269749088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f559d81bdb4acc95208893e11d87e1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2,PodSandboxId:ea1c3acb4de0ebf14d64e96b76d2ee29e8aaace0d900089476a8ad91633f020e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768413335009325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e260ccf04023759b027fb8adcd82425b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6,PodSandboxId:1e623530187b473822202607a845eaa268bc860e2b04d928cf6132e81631741b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768413231619222,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-758469,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 445cf946cdc1d4e383a184c067c48f41,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e62f1ac-9dce-418c-bccf-ebd480c87ef6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2ba9e1d7af63a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   9fda6f0a2567d       storage-provisioner
	901436142b660       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   342f73cb40d64       busybox
	8ecab8c44d72a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   173fab85479db       coredns-6f6b679f8f-54gqb
	a14a1aef37ee3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   9fda6f0a2567d       storage-provisioner
	513d50297bc22       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      20 minutes ago      Running             kube-proxy                1                   af1a2b4ddcaab       kube-proxy-4xc89
	dcadfb0e98975       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      20 minutes ago      Running             kube-scheduler            1                   f3769b8ad536e       kube-scheduler-embed-certs-758469
	2cc2751644145       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      20 minutes ago      Running             kube-controller-manager   1                   ea1c3acb4de0e       kube-controller-manager-embed-certs-758469
	a23eed518f172       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   5ee83674c575a       etcd-embed-certs-758469
	a17b85fff4759       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      20 minutes ago      Running             kube-apiserver            1                   1e623530187b4       kube-apiserver-embed-certs-758469
	
	
	==> coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60039 - 64859 "HINFO IN 4609580037883277511.2890640239383133867. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010570491s
	
	
	==> describe nodes <==
	Name:               embed-certs-758469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-758469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=embed-certs-758469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T00_25_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 00:25:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-758469
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:54:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 00:49:25 +0000   Fri, 16 Aug 2024 00:25:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 00:49:25 +0000   Fri, 16 Aug 2024 00:25:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 00:49:25 +0000   Fri, 16 Aug 2024 00:25:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 00:49:25 +0000   Fri, 16 Aug 2024 00:33:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    embed-certs-758469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3465190e779743bea5b334f70d6b0148
	  System UUID:                3465190e-7797-43be-a5b3-34f70d6b0148
	  Boot ID:                    b88915e2-7fd1-43d6-ad03-378a0e00fe29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-54gqb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-758469                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-758469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-758469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-4xc89                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-758469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-pnmsm               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-758469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-758469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-758469 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-758469 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-758469 event: Registered Node embed-certs-758469 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-758469 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-758469 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-758469 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-758469 event: Registered Node embed-certs-758469 in Controller
	
	
	==> dmesg <==
	[Aug16 00:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050703] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039351] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.791649] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.495460] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.613096] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.316444] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.054722] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061271] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.165924] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.158384] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.306593] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.266557] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.061482] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.321338] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +4.593739] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.428528] systemd-fstab-generator[1558]: Ignoring "noauto" option for root device
	[  +1.312171] kauditd_printk_skb: 64 callbacks suppressed
	[ +11.806439] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] <==
	{"level":"info","ts":"2024-08-16T00:33:35.109383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:35.109408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-08-16T00:33:35.110970Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:embed-certs-758469 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T00:33:35.111010Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:33:35.111091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:33:35.111600Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T00:33:35.111653Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T00:33:35.112836Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:33:35.113926Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-08-16T00:33:35.112884Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:33:35.115414Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-16T00:33:51.572570Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.556854ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814266034637402784 > lease_revoke:<id:192d915892f7e604>","response":"size:29"}
	{"level":"warn","ts":"2024-08-16T00:33:51.773676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.941655ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814266034637402785 > lease_revoke:<id:192d915892f7e5aa>","response":"size:29"}
	{"level":"info","ts":"2024-08-16T00:33:51.773750Z","caller":"traceutil/trace.go:171","msg":"trace[573502085] linearizableReadLoop","detail":"{readStateIndex:643; appliedIndex:641; }","duration":"266.568829ms","start":"2024-08-16T00:33:51.507171Z","end":"2024-08-16T00:33:51.773740Z","steps":["trace[573502085] 'read index received'  (duration: 22.312µs)","trace[573502085] 'applied index is now lower than readState.Index'  (duration: 266.545686ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T00:33:51.773941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.710804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-6f6b679f8f-54gqb\" ","response":"range_response_count:1 size:5042"}
	{"level":"info","ts":"2024-08-16T00:33:51.773976Z","caller":"traceutil/trace.go:171","msg":"trace[428233582] range","detail":"{range_begin:/registry/pods/kube-system/coredns-6f6b679f8f-54gqb; range_end:; response_count:1; response_revision:602; }","duration":"266.806588ms","start":"2024-08-16T00:33:51.507163Z","end":"2024-08-16T00:33:51.773970Z","steps":["trace[428233582] 'agreement among raft nodes before linearized reading'  (duration: 266.604231ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T00:43:35.143405Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":851}
	{"level":"info","ts":"2024-08-16T00:43:35.154077Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":851,"took":"9.753016ms","hash":2822274117,"current-db-size-bytes":2539520,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2539520,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-08-16T00:43:35.154220Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2822274117,"revision":851,"compact-revision":-1}
	{"level":"info","ts":"2024-08-16T00:48:35.150314Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1093}
	{"level":"info","ts":"2024-08-16T00:48:35.154546Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1093,"took":"3.480043ms","hash":2700094522,"current-db-size-bytes":2539520,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1523712,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-16T00:48:35.154701Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2700094522,"revision":1093,"compact-revision":851}
	{"level":"info","ts":"2024-08-16T00:53:35.299182Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1336}
	{"level":"info","ts":"2024-08-16T00:53:35.313086Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1336,"took":"13.265343ms","hash":3915491719,"current-db-size-bytes":2539520,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-16T00:53:35.313197Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3915491719,"revision":1336,"compact-revision":1093}
	
	
	==> kernel <==
	 00:54:18 up 21 min,  0 users,  load average: 1.35, 0.68, 0.32
	Linux embed-certs-758469 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] <==
	I0816 00:49:37.408462       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:49:37.408503       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:51:37.409169       1 handler_proxy.go:99] no RequestInfo found in the context
	W0816 00:51:37.409174       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:51:37.409593       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0816 00:51:37.409592       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 00:51:37.410943       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:51:37.410998       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:53:36.410282       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:53:36.410408       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 00:53:37.412186       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:53:37.412300       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 00:53:37.412197       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:53:37.412356       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 00:53:37.413590       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:53:37.413623       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] <==
	E0816 00:49:12.121053       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:49:12.640284       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:49:25.627536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-758469"
	E0816 00:49:42.128877       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:49:42.650129       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:49:42.656669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="218.028µs"
	I0816 00:49:56.646811       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="136.188µs"
	E0816 00:50:12.135052       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:50:12.657691       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:50:42.141193       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:50:42.667002       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:51:12.146882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:51:12.675106       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:51:42.153542       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:51:42.683877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:52:12.159541       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:52:12.694334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:52:42.165638       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:52:42.703256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:53:12.172323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:53:12.712520       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:53:42.179165       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:53:42.722634       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:54:12.186218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:54:12.732529       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 00:33:37.223147       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 00:33:37.234730       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E0816 00:33:37.234867       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 00:33:37.270876       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 00:33:37.271016       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 00:33:37.271116       1 server_linux.go:169] "Using iptables Proxier"
	I0816 00:33:37.273803       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 00:33:37.274224       1 server.go:483] "Version info" version="v1.31.0"
	I0816 00:33:37.274255       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:33:37.276110       1 config.go:197] "Starting service config controller"
	I0816 00:33:37.276170       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 00:33:37.276193       1 config.go:104] "Starting endpoint slice config controller"
	I0816 00:33:37.276196       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 00:33:37.278319       1 config.go:326] "Starting node config controller"
	I0816 00:33:37.278433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 00:33:37.376833       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 00:33:37.377030       1 shared_informer.go:320] Caches are synced for service config
	I0816 00:33:37.379256       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] <==
	I0816 00:33:34.371337       1 serving.go:386] Generated self-signed cert in-memory
	W0816 00:33:36.342194       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 00:33:36.342285       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 00:33:36.342295       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 00:33:36.342356       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 00:33:36.414680       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 00:33:36.414943       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:33:36.417411       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 00:33:36.417618       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 00:33:36.417545       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 00:33:36.421995       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 00:33:36.522962       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 00:53:12 embed-certs-758469 kubelet[937]: E0816 00:53:12.634040     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:53:12 embed-certs-758469 kubelet[937]: E0816 00:53:12.891735     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769592891317848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:12 embed-certs-758469 kubelet[937]: E0816 00:53:12.891790     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769592891317848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:22 embed-certs-758469 kubelet[937]: E0816 00:53:22.893220     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769602892776258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:22 embed-certs-758469 kubelet[937]: E0816 00:53:22.893490     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769602892776258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:26 embed-certs-758469 kubelet[937]: E0816 00:53:26.631695     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:53:32 embed-certs-758469 kubelet[937]: E0816 00:53:32.650370     937 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 00:53:32 embed-certs-758469 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 00:53:32 embed-certs-758469 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 00:53:32 embed-certs-758469 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 00:53:32 embed-certs-758469 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 00:53:32 embed-certs-758469 kubelet[937]: E0816 00:53:32.895782     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769612895394907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:32 embed-certs-758469 kubelet[937]: E0816 00:53:32.895854     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769612895394907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:37 embed-certs-758469 kubelet[937]: E0816 00:53:37.632247     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:53:42 embed-certs-758469 kubelet[937]: E0816 00:53:42.898023     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769622897544499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:42 embed-certs-758469 kubelet[937]: E0816 00:53:42.898050     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769622897544499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:48 embed-certs-758469 kubelet[937]: E0816 00:53:48.631081     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:53:52 embed-certs-758469 kubelet[937]: E0816 00:53:52.899961     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769632899509434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:52 embed-certs-758469 kubelet[937]: E0816 00:53:52.900416     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769632899509434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:01 embed-certs-758469 kubelet[937]: E0816 00:54:01.631648     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:54:02 embed-certs-758469 kubelet[937]: E0816 00:54:02.901960     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769642901607062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:02 embed-certs-758469 kubelet[937]: E0816 00:54:02.902045     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769642901607062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:12 embed-certs-758469 kubelet[937]: E0816 00:54:12.632319     937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pnmsm" podUID="1fb83d03-46c2-4455-9455-e35c0a968ff1"
	Aug 16 00:54:12 embed-certs-758469 kubelet[937]: E0816 00:54:12.903487     937 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769652903202101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:12 embed-certs-758469 kubelet[937]: E0816 00:54:12.903532     937 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769652903202101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] <==
	I0816 00:34:07.988180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 00:34:08.002401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 00:34:08.002641       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 00:34:25.422064       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 00:34:25.422685       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-758469_eca3381c-6415-4fc5-9e7e-a8c2568ab38e!
	I0816 00:34:25.422327       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cdee9e7c-b24b-41ee-a3da-288faf7470a2", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-758469_eca3381c-6415-4fc5-9e7e-a8c2568ab38e became leader
	I0816 00:34:25.526205       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-758469_eca3381c-6415-4fc5-9e7e-a8c2568ab38e!
	
	
	==> storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] <==
	I0816 00:33:37.151572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 00:34:07.154091       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-758469 -n embed-certs-758469
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-758469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-pnmsm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-758469 describe pod metrics-server-6867b74b74-pnmsm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-758469 describe pod metrics-server-6867b74b74-pnmsm: exit status 1 (60.700682ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-pnmsm" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-758469 describe pod metrics-server-6867b74b74-pnmsm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (427.91s)
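The kubelet log above shows metrics-server-6867b74b74-pnmsm stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, and the post-mortem lists it as the only non-running pod; the follow-up describe most likely returned NotFound because it was run without a namespace while the pod lives in kube-system. A minimal sketch for repeating those checks by hand, assuming the embed-certs-758469 context is still reachable (the explicit -n kube-system is an addition here, not part of the harness's command):

	# list non-running pods across all namespaces (same query the harness used)
	kubectl --context embed-certs-758469 get po -A --field-selector=status.phase!=Running
	# describe the failing metrics-server pod in its actual namespace
	kubectl --context embed-certs-758469 -n kube-system describe pod metrics-server-6867b74b74-pnmsm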

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (442.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-16 00:54:48.869448257 +0000 UTC m=+6563.450027225
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-616827 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-616827 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.041µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-616827 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
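No pod carrying the k8s-app=kubernetes-dashboard label appeared within 9m0s; the follow-up describe of deploy/dashboard-metrics-scraper then failed immediately because the same context deadline had already expired (2.041µs), leaving the image check with no deployment info to match against. A minimal sketch for repeating the checks by hand, assuming the default-k8s-diff-port-616827 context is still reachable; the -l selector and the jsonpath query are illustrative additions, not the harness's exact commands:

	# pods the test waits on
	kubectl --context default-k8s-diff-port-616827 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# deployment the test describes on failure
	kubectl --context default-k8s-diff-port-616827 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper
	# container image, expected to contain registry.k8s.io/echoserver:1.4
	kubectl --context default-k8s-diff-port-616827 -n kubernetes-dashboard get deploy/dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'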
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-616827 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-616827 logs -n 25: (1.140191942s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098619        | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-819398                  | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-758469                 | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-616827       | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098619             | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:52 UTC | 16 Aug 24 00:53 UTC |
	| start   | -p newest-cni-504758 --memory=2200 --alsologtostderr   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-504758             | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-504758                                   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-504758                  | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-504758 --memory=2200 --alsologtostderr   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:54 UTC | 16 Aug 24 00:54 UTC |
	| delete  | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:54 UTC | 16 Aug 24 00:54 UTC |
	| image   | newest-cni-504758 image list                           | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:54 UTC | 16 Aug 24 00:54 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-504758                                   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:54 UTC | 16 Aug 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-504758                                   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:54 UTC | 16 Aug 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-504758                                   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:54 UTC | 16 Aug 24 00:54 UTC |
	| delete  | -p newest-cni-504758                                   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:54 UTC | 16 Aug 24 00:54 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:53:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 00:53:59.873204   85810 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:53:59.873418   85810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:53:59.873432   85810 out.go:358] Setting ErrFile to fd 2...
	I0816 00:53:59.873480   85810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:53:59.874000   85810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:53:59.875082   85810 out.go:352] Setting JSON to false
	I0816 00:53:59.876025   85810 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9340,"bootTime":1723760300,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:53:59.876086   85810 start.go:139] virtualization: kvm guest
	I0816 00:53:59.877808   85810 out.go:177] * [newest-cni-504758] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:53:59.879412   85810 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:53:59.879465   85810 notify.go:220] Checking for updates...
	I0816 00:53:59.881689   85810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:53:59.882889   85810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:53:59.883990   85810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:53:59.885260   85810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:53:59.886614   85810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:53:59.888418   85810 config.go:182] Loaded profile config "newest-cni-504758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:53:59.889045   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:53:59.889129   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:53:59.903823   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42987
	I0816 00:53:59.904331   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:53:59.904868   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:53:59.904885   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:53:59.905186   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:53:59.905427   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:53:59.905682   85810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:53:59.906014   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:53:59.906063   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:53:59.920597   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0816 00:53:59.921028   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:53:59.921558   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:53:59.921585   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:53:59.921878   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:53:59.922101   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:53:59.960476   85810 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:53:59.961667   85810 start.go:297] selected driver: kvm2
	I0816 00:53:59.961693   85810 start.go:901] validating driver "kvm2" against &{Name:newest-cni-504758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-504758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:53:59.961862   85810 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:53:59.962612   85810 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:53:59.962691   85810 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:53:59.979820   85810 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:53:59.980325   85810 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 00:53:59.980411   85810 cni.go:84] Creating CNI manager for ""
	I0816 00:53:59.980428   85810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:53:59.980498   85810 start.go:340] cluster config:
	{Name:newest-cni-504758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-504758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:53:59.980658   85810 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:53:59.982679   85810 out.go:177] * Starting "newest-cni-504758" primary control-plane node in "newest-cni-504758" cluster
	I0816 00:53:59.983985   85810 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:53:59.984028   85810 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:53:59.984040   85810 cache.go:56] Caching tarball of preloaded images
	I0816 00:53:59.984135   85810 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:53:59.984149   85810 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 00:53:59.984291   85810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/config.json ...
	I0816 00:53:59.984554   85810 start.go:360] acquireMachinesLock for newest-cni-504758: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:53:59.984616   85810 start.go:364] duration metric: took 33.654µs to acquireMachinesLock for "newest-cni-504758"
	I0816 00:53:59.984636   85810 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:53:59.984645   85810 fix.go:54] fixHost starting: 
	I0816 00:53:59.985031   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:53:59.985076   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:53:59.999635   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35639
	I0816 00:54:00.000153   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:00.000802   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:00.000828   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:00.001196   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:00.001416   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:00.001618   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetState
	I0816 00:54:00.003393   85810 fix.go:112] recreateIfNeeded on newest-cni-504758: state=Stopped err=<nil>
	I0816 00:54:00.003432   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	W0816 00:54:00.003592   85810 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:54:00.005439   85810 out.go:177] * Restarting existing kvm2 VM for "newest-cni-504758" ...
	I0816 00:54:00.006544   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Start
	I0816 00:54:00.006728   85810 main.go:141] libmachine: (newest-cni-504758) Ensuring networks are active...
	I0816 00:54:00.007518   85810 main.go:141] libmachine: (newest-cni-504758) Ensuring network default is active
	I0816 00:54:00.007916   85810 main.go:141] libmachine: (newest-cni-504758) Ensuring network mk-newest-cni-504758 is active
	I0816 00:54:00.008361   85810 main.go:141] libmachine: (newest-cni-504758) Getting domain xml...
	I0816 00:54:00.009155   85810 main.go:141] libmachine: (newest-cni-504758) Creating domain...
	I0816 00:54:01.244893   85810 main.go:141] libmachine: (newest-cni-504758) Waiting to get IP...
	I0816 00:54:01.245724   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:01.246226   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:01.246290   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:01.246206   85845 retry.go:31] will retry after 269.680632ms: waiting for machine to come up
	I0816 00:54:01.517881   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:01.518403   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:01.518423   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:01.518361   85845 retry.go:31] will retry after 274.232355ms: waiting for machine to come up
	I0816 00:54:01.793786   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:01.794319   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:01.794348   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:01.794273   85845 retry.go:31] will retry after 416.170581ms: waiting for machine to come up
	I0816 00:54:02.212494   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:02.212959   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:02.213002   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:02.212904   85845 retry.go:31] will retry after 465.478219ms: waiting for machine to come up
	I0816 00:54:02.679458   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:02.679920   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:02.679955   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:02.679889   85845 retry.go:31] will retry after 748.437183ms: waiting for machine to come up
	I0816 00:54:03.429734   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:03.430251   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:03.430274   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:03.430199   85845 retry.go:31] will retry after 895.520052ms: waiting for machine to come up
	I0816 00:54:04.326808   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:04.327193   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:04.327219   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:04.327161   85845 retry.go:31] will retry after 754.604111ms: waiting for machine to come up
	I0816 00:54:05.083593   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:05.084040   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:05.084077   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:05.083992   85845 retry.go:31] will retry after 966.654738ms: waiting for machine to come up
	I0816 00:54:06.052390   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:06.052967   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:06.052994   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:06.052911   85845 retry.go:31] will retry after 1.600341812s: waiting for machine to come up
	I0816 00:54:07.655341   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:07.656004   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:07.656029   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:07.655958   85845 retry.go:31] will retry after 2.147103051s: waiting for machine to come up
	I0816 00:54:09.805659   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:09.806134   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:09.806167   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:09.806085   85845 retry.go:31] will retry after 2.057779929s: waiting for machine to come up
	I0816 00:54:11.866329   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:11.866766   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:11.866811   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:11.866746   85845 retry.go:31] will retry after 2.944636547s: waiting for machine to come up
	I0816 00:54:14.813343   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:14.813896   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:14.813924   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:14.813818   85845 retry.go:31] will retry after 4.129457164s: waiting for machine to come up
	I0816 00:54:18.946909   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:18.947356   85810 main.go:141] libmachine: (newest-cni-504758) Found IP for machine: 192.168.72.148
	I0816 00:54:18.947379   85810 main.go:141] libmachine: (newest-cni-504758) Reserving static IP address...
	I0816 00:54:18.947390   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has current primary IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:18.947790   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "newest-cni-504758", mac: "52:54:00:15:1d:34", ip: "192.168.72.148"} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:18.947813   85810 main.go:141] libmachine: (newest-cni-504758) DBG | skip adding static IP to network mk-newest-cni-504758 - found existing host DHCP lease matching {name: "newest-cni-504758", mac: "52:54:00:15:1d:34", ip: "192.168.72.148"}
	I0816 00:54:18.947825   85810 main.go:141] libmachine: (newest-cni-504758) Reserved static IP address: 192.168.72.148
	I0816 00:54:18.947834   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Getting to WaitForSSH function...
	I0816 00:54:18.947852   85810 main.go:141] libmachine: (newest-cni-504758) Waiting for SSH to be available...
	I0816 00:54:18.950059   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:18.950459   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:18.950486   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:18.950670   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Using SSH client type: external
	I0816 00:54:18.950689   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa (-rw-------)
	I0816 00:54:18.950745   85810 main.go:141] libmachine: (newest-cni-504758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:54:18.950761   85810 main.go:141] libmachine: (newest-cni-504758) DBG | About to run SSH command:
	I0816 00:54:18.950780   85810 main.go:141] libmachine: (newest-cni-504758) DBG | exit 0
	I0816 00:54:19.077955   85810 main.go:141] libmachine: (newest-cni-504758) DBG | SSH cmd err, output: <nil>: 
	I0816 00:54:19.078290   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetConfigRaw
	I0816 00:54:19.078870   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetIP
	I0816 00:54:19.081512   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.081958   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:19.082007   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.082236   85810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/config.json ...
	I0816 00:54:19.082503   85810 machine.go:93] provisionDockerMachine start ...
	I0816 00:54:19.082522   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:19.082746   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:19.084934   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.085244   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:19.085274   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.085418   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:19.085582   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:19.085707   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:19.085887   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:19.086036   85810 main.go:141] libmachine: Using SSH client type: native
	I0816 00:54:19.086259   85810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0816 00:54:19.086276   85810 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:54:19.198757   85810 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:54:19.198784   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetMachineName
	I0816 00:54:19.199019   85810 buildroot.go:166] provisioning hostname "newest-cni-504758"
	I0816 00:54:19.199042   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetMachineName
	I0816 00:54:19.199208   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:19.202175   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.202593   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:19.202631   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.202721   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:19.202924   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:19.203107   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:19.203280   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:19.203461   85810 main.go:141] libmachine: Using SSH client type: native
	I0816 00:54:19.203683   85810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0816 00:54:19.203704   85810 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-504758 && echo "newest-cni-504758" | sudo tee /etc/hostname
	I0816 00:54:19.336091   85810 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-504758
	
	I0816 00:54:19.336118   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:19.340588   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.341982   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:19.342011   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.342316   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:19.342533   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:19.342730   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:19.342933   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:19.343135   85810 main.go:141] libmachine: Using SSH client type: native
	I0816 00:54:19.343379   85810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0816 00:54:19.343408   85810 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-504758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-504758/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-504758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:54:19.465224   85810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:54:19.465254   85810 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:54:19.465276   85810 buildroot.go:174] setting up certificates
	I0816 00:54:19.465286   85810 provision.go:84] configureAuth start
	I0816 00:54:19.465295   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetMachineName
	I0816 00:54:19.465642   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetIP
	I0816 00:54:19.468539   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.468954   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:19.468978   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.469117   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:19.812124   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.812533   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:19.812559   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.812803   85810 provision.go:143] copyHostCerts
	I0816 00:54:19.812863   85810 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:54:19.812880   85810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:54:19.812947   85810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:54:19.813069   85810 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:54:19.813081   85810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:54:19.813118   85810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:54:19.813211   85810 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:54:19.813222   85810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:54:19.813250   85810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:54:19.813329   85810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.newest-cni-504758 san=[127.0.0.1 192.168.72.148 localhost minikube newest-cni-504758]
	I0816 00:54:19.977641   85810 provision.go:177] copyRemoteCerts
	I0816 00:54:19.977706   85810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:54:19.977736   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:19.981396   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.981742   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:19.981790   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:19.982004   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:19.982207   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:19.982372   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:19.982480   85810 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa Username:docker}
	I0816 00:54:20.068094   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:54:20.093236   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:54:20.118133   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:54:20.142373   85810 provision.go:87] duration metric: took 677.06736ms to configureAuth
	I0816 00:54:20.142399   85810 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:54:20.142605   85810 config.go:182] Loaded profile config "newest-cni-504758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:54:20.142717   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:20.145720   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.146115   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:20.146143   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.146294   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:20.146507   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:20.146658   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:20.146802   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:20.146986   85810 main.go:141] libmachine: Using SSH client type: native
	I0816 00:54:20.147158   85810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0816 00:54:20.147173   85810 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:54:20.416964   85810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:54:20.416989   85810 machine.go:96] duration metric: took 1.334473099s to provisionDockerMachine
	I0816 00:54:20.417000   85810 start.go:293] postStartSetup for "newest-cni-504758" (driver="kvm2")
	I0816 00:54:20.417009   85810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:54:20.417022   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:20.417304   85810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:54:20.417323   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:20.419927   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.420322   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:20.420351   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.420482   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:20.420681   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:20.420853   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:20.421004   85810 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa Username:docker}
	I0816 00:54:20.505240   85810 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:54:20.509985   85810 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:54:20.510016   85810 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:54:20.510089   85810 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:54:20.510181   85810 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:54:20.510304   85810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:54:20.520727   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:54:20.546368   85810 start.go:296] duration metric: took 129.356503ms for postStartSetup
	I0816 00:54:20.546404   85810 fix.go:56] duration metric: took 20.561758792s for fixHost
	I0816 00:54:20.546422   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:20.548826   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.549206   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:20.549232   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.549371   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:20.549703   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:20.549890   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:20.550033   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:20.550215   85810 main.go:141] libmachine: Using SSH client type: native
	I0816 00:54:20.550396   85810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.148 22 <nil> <nil>}
	I0816 00:54:20.550409   85810 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:54:20.658839   85810 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723769660.632558984
	
	I0816 00:54:20.658861   85810 fix.go:216] guest clock: 1723769660.632558984
	I0816 00:54:20.658871   85810 fix.go:229] Guest: 2024-08-16 00:54:20.632558984 +0000 UTC Remote: 2024-08-16 00:54:20.546407118 +0000 UTC m=+20.706697390 (delta=86.151866ms)
	I0816 00:54:20.658895   85810 fix.go:200] guest clock delta is within tolerance: 86.151866ms
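The clock-fix step above compares the guest VM's clock against the host and only forces a resync when the difference exceeds a tolerance. A minimal Go sketch of that kind of check (not minikube's actual fix.go code; the 2-second tolerance is an assumption for illustration):

    // Sketch of a guest/host clock-drift check; tolerance value is assumed.
    package main

    import (
    	"fmt"
    	"time"
    )

    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(86 * time.Millisecond) // a delta similar to the one logged above
    	if delta, ok := withinTolerance(guest, host, 2*time.Second); ok {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v too large, would resync\n", delta)
    	}
    }
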
	I0816 00:54:20.658902   85810 start.go:83] releasing machines lock for "newest-cni-504758", held for 20.674273808s
	I0816 00:54:20.658929   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:20.659189   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetIP
	I0816 00:54:20.661701   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.662126   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:20.662170   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.662317   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:20.662760   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:20.662954   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:20.663049   85810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:54:20.663095   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:20.663147   85810 ssh_runner.go:195] Run: cat /version.json
	I0816 00:54:20.663192   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:20.665720   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.665996   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.666084   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:20.666114   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.666240   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:20.666433   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:20.666445   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:20.666461   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:20.666572   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:20.666680   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:20.666746   85810 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa Username:docker}
	I0816 00:54:20.666845   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:20.666991   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:20.667170   85810 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa Username:docker}
	I0816 00:54:20.743262   85810 ssh_runner.go:195] Run: systemctl --version
	I0816 00:54:20.766273   85810 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:54:20.909309   85810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:54:20.915680   85810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:54:20.915744   85810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:54:20.932455   85810 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:54:20.932480   85810 start.go:495] detecting cgroup driver to use...
	I0816 00:54:20.932544   85810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:54:20.948447   85810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:54:20.963015   85810 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:54:20.963092   85810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:54:20.977223   85810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:54:20.991598   85810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:54:21.106568   85810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:54:21.275945   85810 docker.go:233] disabling docker service ...
	I0816 00:54:21.276019   85810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:54:21.298525   85810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:54:21.311324   85810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:54:21.426171   85810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:54:21.543379   85810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:54:21.558973   85810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:54:21.577733   85810 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:54:21.577805   85810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:54:21.588552   85810 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:54:21.588617   85810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:54:21.599341   85810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:54:21.610291   85810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:54:21.620995   85810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:54:21.632231   85810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:54:21.642882   85810 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:54:21.660759   85810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:54:21.671229   85810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:54:21.680803   85810 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:54:21.680861   85810 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:54:21.693960   85810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:54:21.703790   85810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:54:21.816034   85810 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:54:21.955075   85810 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:54:21.955150   85810 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:54:21.960551   85810 start.go:563] Will wait 60s for crictl version
	I0816 00:54:21.960604   85810 ssh_runner.go:195] Run: which crictl
	I0816 00:54:21.964614   85810 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:54:22.010163   85810 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:54:22.010263   85810 ssh_runner.go:195] Run: crio --version
	I0816 00:54:22.039516   85810 ssh_runner.go:195] Run: crio --version
	I0816 00:54:22.069950   85810 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:54:22.071282   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetIP
	I0816 00:54:22.073672   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:22.074091   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:22.074119   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:22.074347   85810 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:54:22.078606   85810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:54:22.092791   85810 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0816 00:54:22.093925   85810 kubeadm.go:883] updating cluster {Name:newest-cni-504758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-504758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:54:22.094043   85810 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:54:22.094101   85810 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:54:22.129994   85810 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:54:22.130053   85810 ssh_runner.go:195] Run: which lz4
	I0816 00:54:22.134156   85810 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:54:22.138324   85810 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:54:22.138358   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:54:23.501153   85810 crio.go:462] duration metric: took 1.367030753s to copy over tarball
	I0816 00:54:23.501225   85810 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:54:25.608679   85810 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.107431339s)
	I0816 00:54:25.608704   85810 crio.go:469] duration metric: took 2.107527564s to extract the tarball
	I0816 00:54:25.608711   85810 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:54:25.646398   85810 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:54:25.688688   85810 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:54:25.688710   85810 cache_images.go:84] Images are preloaded, skipping loading
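The preload check above works by listing the runtime's images as JSON and looking for the expected kube-apiserver tag; once the tarball has been extracted, the same "crictl images --output json" call finds everything and image loading is skipped. A small sketch of that check, assuming crictl's usual JSON shape ({"images":[{"repoTags":[...]}]}):

    // Sketch of a "are the preloaded images present?" check via crictl.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		return false, err
    	}
    	for _, img := range imgs.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
    	fmt.Println(ok, err)
    }
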
	I0816 00:54:25.688718   85810 kubeadm.go:934] updating node { 192.168.72.148 8443 v1.31.0 crio true true} ...
	I0816 00:54:25.688816   85810 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-504758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-504758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:54:25.688905   85810 ssh_runner.go:195] Run: crio config
	I0816 00:54:25.732762   85810 cni.go:84] Creating CNI manager for ""
	I0816 00:54:25.732787   85810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:54:25.732802   85810 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0816 00:54:25.732828   85810 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.148 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-504758 NodeName:newest-cni-504758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:54:25.733013   85810 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-504758"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:54:25.733106   85810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:54:25.743796   85810 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:54:25.743864   85810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:54:25.753931   85810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0816 00:54:25.770813   85810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:54:25.787379   85810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0816 00:54:25.804390   85810 ssh_runner.go:195] Run: grep 192.168.72.148	control-plane.minikube.internal$ /etc/hosts
	I0816 00:54:25.808345   85810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:54:25.821129   85810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:54:25.934774   85810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:54:25.951768   85810 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758 for IP: 192.168.72.148
	I0816 00:54:25.951790   85810 certs.go:194] generating shared ca certs ...
	I0816 00:54:25.951810   85810 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:54:25.951944   85810 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:54:25.952012   85810 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:54:25.952026   85810 certs.go:256] generating profile certs ...
	I0816 00:54:25.952119   85810 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/client.key
	I0816 00:54:25.952196   85810 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/apiserver.key.26c1a586
	I0816 00:54:25.952252   85810 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/proxy-client.key
	I0816 00:54:25.952388   85810 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:54:25.952434   85810 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:54:25.952448   85810 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:54:25.952490   85810 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:54:25.952527   85810 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:54:25.952557   85810 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:54:25.952610   85810 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:54:25.953915   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:54:25.997655   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:54:26.039888   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:54:26.081048   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:54:26.109920   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 00:54:26.138565   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:54:26.162193   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:54:26.186176   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:54:26.211145   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:54:26.235255   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:54:26.262574   85810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:54:26.286494   85810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:54:26.303052   85810 ssh_runner.go:195] Run: openssl version
	I0816 00:54:26.308886   85810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:54:26.320564   85810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:54:26.325281   85810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:54:26.325337   85810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:54:26.331510   85810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:54:26.343077   85810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:54:26.354606   85810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:54:26.359306   85810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:54:26.359360   85810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:54:26.365186   85810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:54:26.376521   85810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:54:26.388119   85810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:54:26.392937   85810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:54:26.392995   85810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:54:26.399182   85810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
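The certificate steps above follow the standard OpenSSL trust-store convention: each CA certificate is copied under /usr/share/ca-certificates and a symlink named <subject-hash>.0 is created in /etc/ssl/certs so TLS libraries can locate it. A hedged Go sketch of that hash-and-link step (paths are illustrative, not a claim about minikube's own code):

    // Sketch: compute the OpenSSL subject hash and create the <hash>.0 symlink.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkCert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// Replace any existing link, mirroring `ln -fs`.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
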
	I0816 00:54:26.410412   85810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:54:26.415384   85810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:54:26.421201   85810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:54:26.427078   85810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:54:26.433101   85810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:54:26.438839   85810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:54:26.445058   85810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
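Each "openssl x509 -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now. The same check can be expressed with crypto/x509; this is an illustrative sketch, not the code minikube runs:

    // Sketch of an expiry-window check equivalent to `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate at path expires within the given window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
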
	I0816 00:54:26.451045   85810 kubeadm.go:392] StartCluster: {Name:newest-cni-504758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-504758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:54:26.451169   85810 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:54:26.451218   85810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:54:26.494229   85810 cri.go:89] found id: ""
	I0816 00:54:26.494304   85810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:54:26.505328   85810 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:54:26.505351   85810 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:54:26.505392   85810 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:54:26.516006   85810 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:54:26.516792   85810 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-504758" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:54:26.517030   85810 kubeconfig.go:62] /home/jenkins/minikube-integration/19452-12919/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-504758" cluster setting kubeconfig missing "newest-cni-504758" context setting]
	I0816 00:54:26.517497   85810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:54:26.519037   85810 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:54:26.528994   85810 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.148
	I0816 00:54:26.529025   85810 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:54:26.529038   85810 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:54:26.529085   85810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:54:26.573944   85810 cri.go:89] found id: ""
	I0816 00:54:26.574013   85810 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:54:26.591774   85810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:54:26.602116   85810 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:54:26.602146   85810 kubeadm.go:157] found existing configuration files:
	
	I0816 00:54:26.602201   85810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:54:26.612010   85810 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:54:26.612076   85810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:54:26.622406   85810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:54:26.631806   85810 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:54:26.631864   85810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:54:26.642284   85810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:54:26.651728   85810 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:54:26.651788   85810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:54:26.661516   85810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:54:26.671116   85810 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:54:26.671173   85810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:54:26.681084   85810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:54:26.690953   85810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:54:26.798941   85810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:54:27.688815   85810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:54:27.909656   85810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:54:27.982005   85810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:54:28.069100   85810 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:54:28.069187   85810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:54:28.569301   85810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:54:29.069870   85810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:54:29.569277   85810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:54:30.069679   85810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:54:30.083819   85810 api_server.go:72] duration metric: took 2.014730291s to wait for apiserver process to appear ...
	I0816 00:54:30.083851   85810 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:54:30.083873   85810 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8443/healthz ...
	I0816 00:54:32.793814   85810 api_server.go:279] https://192.168.72.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:54:32.793869   85810 api_server.go:103] status: https://192.168.72.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:54:32.793885   85810 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8443/healthz ...
	I0816 00:54:32.833150   85810 api_server.go:279] https://192.168.72.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:54:32.833178   85810 api_server.go:103] status: https://192.168.72.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:54:33.084629   85810 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8443/healthz ...
	I0816 00:54:33.089023   85810 api_server.go:279] https://192.168.72.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:54:33.089056   85810 api_server.go:103] status: https://192.168.72.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:54:33.584773   85810 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8443/healthz ...
	I0816 00:54:33.592060   85810 api_server.go:279] https://192.168.72.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:54:33.592086   85810 api_server.go:103] status: https://192.168.72.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:54:34.084784   85810 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8443/healthz ...
	I0816 00:54:34.089139   85810 api_server.go:279] https://192.168.72.148:8443/healthz returned 200:
	ok
	I0816 00:54:34.095409   85810 api_server.go:141] control plane version: v1.31.0
	I0816 00:54:34.095435   85810 api_server.go:131] duration metric: took 4.011578023s to wait for apiserver health ...
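The readiness wait above polls the apiserver's /healthz endpoint, tolerating the initial 403 (anonymous user) and 500 (RBAC bootstrap roles not yet created) responses until it finally returns 200 "ok". A minimal sketch of such a polling loop; skipping TLS verification here is purely for illustration, since a real client would verify the server certificate:

    // Sketch: poll /healthz until the apiserver reports "ok" or a deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.72.148:8443/healthz", 4*time.Minute))
    }
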
	I0816 00:54:34.095445   85810 cni.go:84] Creating CNI manager for ""
	I0816 00:54:34.095451   85810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:54:34.097313   85810 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:54:34.098622   85810 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:54:34.110223   85810 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
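The log only shows that a 496-byte conflist is written to /etc/cni/net.d/1-k8s.conflist, not its contents. For reference, a representative bridge+portmap conflist of the kind a bridge CNI setup uses (not necessarily the exact file minikube installs), built here with encoding/json and using this run's pod CIDR:

    // Sketch: emit a representative bridge CNI conflist; field values are assumptions.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	conflist := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":      "bridge",
    				"bridge":    "bridge",
    				"isGateway": true,
    				"ipMasq":    true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.42.0.0/16",
    				},
    			},
    			{
    				"type":         "portmap",
    				"capabilities": map[string]any{"portMappings": true},
    			},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(out))
    }
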
	I0816 00:54:34.156522   85810 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:54:34.175134   85810 system_pods.go:59] 8 kube-system pods found
	I0816 00:54:34.175163   85810 system_pods.go:61] "coredns-6f6b679f8f-xfb9r" [e0019fd6-72ab-4ad0-ad25-75bd44201235] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:54:34.175172   85810 system_pods.go:61] "etcd-newest-cni-504758" [c1a5fa50-5fe4-4232-a85a-d8a3317eb776] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:54:34.175180   85810 system_pods.go:61] "kube-apiserver-newest-cni-504758" [31b004d6-f79a-4dad-adb2-8d02266d37fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:54:34.175186   85810 system_pods.go:61] "kube-controller-manager-newest-cni-504758" [1bd1f7ff-9d03-423f-88d5-4402e5321dfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:54:34.175192   85810 system_pods.go:61] "kube-proxy-pn4wk" [4a1a7923-96a8-4212-bbbb-8257d3f355c5] Running
	I0816 00:54:34.175197   85810 system_pods.go:61] "kube-scheduler-newest-cni-504758" [c8aaf34a-189a-479e-ae82-2eed2d6cbb01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:54:34.175202   85810 system_pods.go:61] "metrics-server-6867b74b74-ls8qn" [941399ed-f39b-46c4-8544-ce073fec3a88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:54:34.175206   85810 system_pods.go:61] "storage-provisioner" [a47e172d-4858-4a48-a72d-0dbd3fdff698] Running
	I0816 00:54:34.175211   85810 system_pods.go:74] duration metric: took 18.670637ms to wait for pod list to return data ...
	I0816 00:54:34.175219   85810 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:54:34.179492   85810 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:54:34.179523   85810 node_conditions.go:123] node cpu capacity is 2
	I0816 00:54:34.179533   85810 node_conditions.go:105] duration metric: took 4.309359ms to run NodePressure ...
	I0816 00:54:34.179548   85810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:54:34.446420   85810 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:54:34.458365   85810 ops.go:34] apiserver oom_adj: -16
	I0816 00:54:34.458386   85810 kubeadm.go:597] duration metric: took 7.953029022s to restartPrimaryControlPlane
	I0816 00:54:34.458395   85810 kubeadm.go:394] duration metric: took 8.007359019s to StartCluster
	I0816 00:54:34.458411   85810 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:54:34.458480   85810 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:54:34.459208   85810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:54:34.459410   85810 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.148 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:54:34.459478   85810 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:54:34.459566   85810 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-504758"
	I0816 00:54:34.459570   85810 addons.go:69] Setting dashboard=true in profile "newest-cni-504758"
	I0816 00:54:34.459601   85810 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-504758"
	W0816 00:54:34.459609   85810 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:54:34.459619   85810 addons.go:234] Setting addon dashboard=true in "newest-cni-504758"
	W0816 00:54:34.459640   85810 addons.go:243] addon dashboard should already be in state true
	I0816 00:54:34.459644   85810 host.go:66] Checking if "newest-cni-504758" exists ...
	I0816 00:54:34.459635   85810 addons.go:69] Setting metrics-server=true in profile "newest-cni-504758"
	I0816 00:54:34.459651   85810 config.go:182] Loaded profile config "newest-cni-504758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:54:34.459672   85810 addons.go:234] Setting addon metrics-server=true in "newest-cni-504758"
	W0816 00:54:34.459685   85810 addons.go:243] addon metrics-server should already be in state true
	I0816 00:54:34.459724   85810 host.go:66] Checking if "newest-cni-504758" exists ...
	I0816 00:54:34.459675   85810 host.go:66] Checking if "newest-cni-504758" exists ...
	I0816 00:54:34.459610   85810 addons.go:69] Setting default-storageclass=true in profile "newest-cni-504758"
	I0816 00:54:34.459831   85810 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-504758"
	I0816 00:54:34.460051   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:54:34.460092   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:54:34.460125   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:54:34.460133   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:54:34.460145   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:54:34.460174   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:54:34.460229   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:54:34.460278   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:54:34.462065   85810 out.go:177] * Verifying Kubernetes components...
	I0816 00:54:34.463592   85810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:54:34.475676   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0816 00:54:34.475769   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42183
	I0816 00:54:34.475894   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0816 00:54:34.476190   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:34.476243   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:34.476194   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:34.476715   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:34.476735   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:34.476836   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:34.476848   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:34.476856   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:34.476865   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:34.477079   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:34.477186   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:34.477202   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:34.477631   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:54:34.477657   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:54:34.477746   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:54:34.477784   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:54:34.478232   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:54:34.478261   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:54:34.478327   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0816 00:54:34.478669   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:34.479111   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:34.479137   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:34.479415   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:34.479611   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetState
	I0816 00:54:34.484594   85810 addons.go:234] Setting addon default-storageclass=true in "newest-cni-504758"
	W0816 00:54:34.484617   85810 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:54:34.484644   85810 host.go:66] Checking if "newest-cni-504758" exists ...
	I0816 00:54:34.485002   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:54:34.485047   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:54:34.496544   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0816 00:54:34.497033   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:34.497481   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:34.497500   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:34.497572   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44559
	I0816 00:54:34.497867   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:34.498042   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetState
	I0816 00:54:34.498104   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:34.498550   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:34.498564   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:34.499128   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:34.499316   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetState
	I0816 00:54:34.500019   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:34.500814   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:34.502454   85810 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:54:34.502496   85810 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:54:34.503788   85810 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:54:34.503805   85810 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:54:34.503820   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:34.503963   85810 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:54:34.503979   85810 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:54:34.503995   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:34.507103   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0816 00:54:34.507544   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:34.507873   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:34.508574   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:34.508649   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:34.508664   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:34.508684   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:34.508700   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:34.508715   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:34.508792   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:34.508874   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:34.508885   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:34.508938   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:34.509169   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:34.509218   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:34.509330   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:34.509391   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:34.509486   85810 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa Username:docker}
	I0816 00:54:34.509536   85810 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa Username:docker}
	I0816 00:54:34.510311   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:54:34.510340   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:54:34.516308   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39061
	I0816 00:54:34.516742   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:34.517346   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:34.517360   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:34.517694   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:34.517935   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetState
	I0816 00:54:34.519502   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:34.521621   85810 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0816 00:54:34.523252   85810 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0816 00:54:34.524789   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 00:54:34.524805   85810 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 00:54:34.524822   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:34.527610   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:34.527823   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0816 00:54:34.528048   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:34.528077   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:34.528269   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:34.528333   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:34.528547   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:34.528748   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:34.528766   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:34.528782   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:34.528925   85810 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa Username:docker}
	I0816 00:54:34.529171   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:34.529427   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetState
	I0816 00:54:34.530847   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:34.531150   85810 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:54:34.531164   85810 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:54:34.531176   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHHostname
	I0816 00:54:34.533651   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:34.534014   85810 main.go:141] libmachine: (newest-cni-504758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:1d:34", ip: ""} in network mk-newest-cni-504758: {Iface:virbr3 ExpiryTime:2024-08-16 01:54:11 +0000 UTC Type:0 Mac:52:54:00:15:1d:34 Iaid: IPaddr:192.168.72.148 Prefix:24 Hostname:newest-cni-504758 Clientid:01:52:54:00:15:1d:34}
	I0816 00:54:34.534039   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined IP address 192.168.72.148 and MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:34.534212   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHPort
	I0816 00:54:34.534392   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHKeyPath
	I0816 00:54:34.534588   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetSSHUsername
	I0816 00:54:34.534737   85810 sshutil.go:53] new ssh client: &{IP:192.168.72.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/newest-cni-504758/id_rsa Username:docker}
	I0816 00:54:34.649648   85810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:54:34.666310   85810 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:54:34.666395   85810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:54:34.680387   85810 api_server.go:72] duration metric: took 220.941941ms to wait for apiserver process to appear ...
	I0816 00:54:34.680419   85810 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:54:34.680438   85810 api_server.go:253] Checking apiserver healthz at https://192.168.72.148:8443/healthz ...
	I0816 00:54:34.684559   85810 api_server.go:279] https://192.168.72.148:8443/healthz returned 200:
	ok
	I0816 00:54:34.685605   85810 api_server.go:141] control plane version: v1.31.0
	I0816 00:54:34.685624   85810 api_server.go:131] duration metric: took 5.197297ms to wait for apiserver health ...
	I0816 00:54:34.685631   85810 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:54:34.691732   85810 system_pods.go:59] 8 kube-system pods found
	I0816 00:54:34.691759   85810 system_pods.go:61] "coredns-6f6b679f8f-xfb9r" [e0019fd6-72ab-4ad0-ad25-75bd44201235] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:54:34.691766   85810 system_pods.go:61] "etcd-newest-cni-504758" [c1a5fa50-5fe4-4232-a85a-d8a3317eb776] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:54:34.691774   85810 system_pods.go:61] "kube-apiserver-newest-cni-504758" [31b004d6-f79a-4dad-adb2-8d02266d37fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:54:34.691782   85810 system_pods.go:61] "kube-controller-manager-newest-cni-504758" [1bd1f7ff-9d03-423f-88d5-4402e5321dfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:54:34.691786   85810 system_pods.go:61] "kube-proxy-pn4wk" [4a1a7923-96a8-4212-bbbb-8257d3f355c5] Running
	I0816 00:54:34.691792   85810 system_pods.go:61] "kube-scheduler-newest-cni-504758" [c8aaf34a-189a-479e-ae82-2eed2d6cbb01] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:54:34.691797   85810 system_pods.go:61] "metrics-server-6867b74b74-ls8qn" [941399ed-f39b-46c4-8544-ce073fec3a88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:54:34.691801   85810 system_pods.go:61] "storage-provisioner" [a47e172d-4858-4a48-a72d-0dbd3fdff698] Running
	I0816 00:54:34.691807   85810 system_pods.go:74] duration metric: took 6.170563ms to wait for pod list to return data ...
	I0816 00:54:34.691817   85810 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:54:34.695759   85810 default_sa.go:45] found service account: "default"
	I0816 00:54:34.695782   85810 default_sa.go:55] duration metric: took 3.959639ms for default service account to be created ...
	I0816 00:54:34.695792   85810 kubeadm.go:582] duration metric: took 236.352141ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 00:54:34.695805   85810 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:54:34.698681   85810 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:54:34.698701   85810 node_conditions.go:123] node cpu capacity is 2
	I0816 00:54:34.698711   85810 node_conditions.go:105] duration metric: took 2.901959ms to run NodePressure ...
	I0816 00:54:34.698721   85810 start.go:241] waiting for startup goroutines ...
	I0816 00:54:34.751182   85810 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:54:34.751202   85810 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:54:34.774698   85810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:54:34.776494   85810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:54:34.784129   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 00:54:34.784155   85810 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 00:54:34.805373   85810 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:54:34.805396   85810 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:54:34.852771   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 00:54:34.852801   85810 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 00:54:34.866123   85810 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:54:34.866149   85810 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:54:34.880041   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 00:54:34.880065   85810 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 00:54:34.908137   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 00:54:34.908165   85810 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0816 00:54:34.979182   85810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:54:34.999050   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 00:54:34.999077   85810 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 00:54:35.084776   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 00:54:35.084809   85810 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 00:54:35.124596   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 00:54:35.124625   85810 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 00:54:35.208117   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 00:54:35.208140   85810 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 00:54:35.260318   85810 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 00:54:35.260346   85810 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 00:54:35.316859   85810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 00:54:35.330579   85810 main.go:141] libmachine: Making call to close driver server
	I0816 00:54:35.330607   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Close
	I0816 00:54:35.330943   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Closing plugin on server side
	I0816 00:54:35.330945   85810 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:54:35.330967   85810 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:54:35.330977   85810 main.go:141] libmachine: Making call to close driver server
	I0816 00:54:35.330985   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Close
	I0816 00:54:35.331241   85810 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:54:35.331271   85810 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:54:35.331278   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Closing plugin on server side
	I0816 00:54:35.337647   85810 main.go:141] libmachine: Making call to close driver server
	I0816 00:54:35.337664   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Close
	I0816 00:54:35.337915   85810 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:54:35.337966   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Closing plugin on server side
	I0816 00:54:35.337984   85810 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:54:36.413585   85810 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.637061583s)
	I0816 00:54:36.413633   85810 main.go:141] libmachine: Making call to close driver server
	I0816 00:54:36.413653   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Close
	I0816 00:54:36.414069   85810 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:54:36.414088   85810 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:54:36.414087   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Closing plugin on server side
	I0816 00:54:36.414098   85810 main.go:141] libmachine: Making call to close driver server
	I0816 00:54:36.414124   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Close
	I0816 00:54:36.414384   85810 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:54:36.414396   85810 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:54:36.414419   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Closing plugin on server side
	I0816 00:54:36.462037   85810 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.482807381s)
	I0816 00:54:36.462085   85810 main.go:141] libmachine: Making call to close driver server
	I0816 00:54:36.462100   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Close
	I0816 00:54:36.462479   85810 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:54:36.462499   85810 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:54:36.462508   85810 main.go:141] libmachine: Making call to close driver server
	I0816 00:54:36.462517   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Close
	I0816 00:54:36.462533   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Closing plugin on server side
	I0816 00:54:36.462748   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Closing plugin on server side
	I0816 00:54:36.462795   85810 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:54:36.462806   85810 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:54:36.462819   85810 addons.go:475] Verifying addon metrics-server=true in "newest-cni-504758"
	I0816 00:54:36.776691   85810 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.459780467s)
	I0816 00:54:36.776753   85810 main.go:141] libmachine: Making call to close driver server
	I0816 00:54:36.776769   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Close
	I0816 00:54:36.777064   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Closing plugin on server side
	I0816 00:54:36.777108   85810 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:54:36.777126   85810 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:54:36.777141   85810 main.go:141] libmachine: Making call to close driver server
	I0816 00:54:36.777149   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Close
	I0816 00:54:36.777350   85810 main.go:141] libmachine: (newest-cni-504758) DBG | Closing plugin on server side
	I0816 00:54:36.777374   85810 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:54:36.777392   85810 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:54:36.779091   85810 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-504758 addons enable metrics-server
	
	I0816 00:54:36.780670   85810 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 00:54:36.782264   85810 addons.go:510] duration metric: took 2.322786639s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0816 00:54:36.782302   85810 start.go:246] waiting for cluster config update ...
	I0816 00:54:36.782318   85810 start.go:255] writing updated cluster config ...
	I0816 00:54:36.782615   85810 ssh_runner.go:195] Run: rm -f paused
	I0816 00:54:36.830377   85810 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:54:36.832254   85810 out.go:177] * Done! kubectl is now configured to use "newest-cni-504758" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.494622897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769689494597633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2903b57-06d7-4553-88f3-f8ba15b5c16e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.495130492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5fb56df-7890-478d-b1bf-a2979f8dc079 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.495189196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5fb56df-7890-478d-b1bf-a2979f8dc079 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.495383624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768468688541660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f014e6cda883e1be849366a2984c3f9f80db9a87d96485de121db9c754b4dac7,PodSandboxId:69b55dbd9e253a720509e9a771d0c2fcc2f04a040953538851d503ffd85121e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768446956220991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44031c7f-e317-4703-aab3-50572aae00c2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c,PodSandboxId:6a331d270c6f2e515692365fdf220ed7c2bd679ea0a7e9235f6a77988827201c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768445697494669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4n9qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611de0e-5480-4841-bfb5-68050fa068aa,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8,PodSandboxId:306632430a90e1623825395b3f2e25a8ada85715621156079531fcd81637da13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768437945514172,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8f9913-5
496-4fda-800e-c942e714f13e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768437889693500,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-
c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87,PodSandboxId:7c2e6768a141badbb09ec9f4e6a4923bf3120cd0def717d5b018008ffa5d64ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768433147183991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656000523d0c38f28776f138cadf7775,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60,PodSandboxId:92ed2606bf7babe56c413aaa4a3ebaca03052e6f0c12c046cbff2d1a11814de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768433119357113,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f07b65b5b4891ed9946624fdc67020,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46,PodSandboxId:81667bd6b6c80b2d134d3735979e4059be1a5c6b0671b2cb1665a5dc21af860c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768433093688159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1376704204a85444fb745b41bd56a466,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86,PodSandboxId:c52e756e9d40c87a3a35388b00547a911f122aa5a17fd6456f28ecc6c19441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768433126966873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e61dbeec6c5826180b0c3cc193efb
0,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5fb56df-7890-478d-b1bf-a2979f8dc079 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.532935051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c85ab7e-730a-41fb-aada-d0bd42db07b2 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.533020374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c85ab7e-730a-41fb-aada-d0bd42db07b2 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.534381082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=808d69d7-8b6c-4091-bf38-4c1e5152bde7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.534792002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769689534752828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=808d69d7-8b6c-4091-bf38-4c1e5152bde7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.535498688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01bd116b-fa26-4dfc-8e59-72163fd8b9b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.535657486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01bd116b-fa26-4dfc-8e59-72163fd8b9b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.535922733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768468688541660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f014e6cda883e1be849366a2984c3f9f80db9a87d96485de121db9c754b4dac7,PodSandboxId:69b55dbd9e253a720509e9a771d0c2fcc2f04a040953538851d503ffd85121e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768446956220991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44031c7f-e317-4703-aab3-50572aae00c2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c,PodSandboxId:6a331d270c6f2e515692365fdf220ed7c2bd679ea0a7e9235f6a77988827201c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768445697494669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4n9qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611de0e-5480-4841-bfb5-68050fa068aa,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8,PodSandboxId:306632430a90e1623825395b3f2e25a8ada85715621156079531fcd81637da13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768437945514172,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8f9913-5
496-4fda-800e-c942e714f13e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768437889693500,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-
c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87,PodSandboxId:7c2e6768a141badbb09ec9f4e6a4923bf3120cd0def717d5b018008ffa5d64ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768433147183991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656000523d0c38f28776f138cadf7775,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60,PodSandboxId:92ed2606bf7babe56c413aaa4a3ebaca03052e6f0c12c046cbff2d1a11814de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768433119357113,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f07b65b5b4891ed9946624fdc67020,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46,PodSandboxId:81667bd6b6c80b2d134d3735979e4059be1a5c6b0671b2cb1665a5dc21af860c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768433093688159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1376704204a85444fb745b41bd56a466,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86,PodSandboxId:c52e756e9d40c87a3a35388b00547a911f122aa5a17fd6456f28ecc6c19441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768433126966873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e61dbeec6c5826180b0c3cc193efb
0,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01bd116b-fa26-4dfc-8e59-72163fd8b9b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.580270075Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a77764cd-fcd3-4bae-bfc3-7ff20c3d43a4 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.580361978Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a77764cd-fcd3-4bae-bfc3-7ff20c3d43a4 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.581452256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=111c669f-d268-4980-8f93-285bba47d0f7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.581842388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769689581822032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=111c669f-d268-4980-8f93-285bba47d0f7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.582458942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d4aef56-6f23-4148-b979-ff4b710949a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.582509169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d4aef56-6f23-4148-b979-ff4b710949a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.582695606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768468688541660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f014e6cda883e1be849366a2984c3f9f80db9a87d96485de121db9c754b4dac7,PodSandboxId:69b55dbd9e253a720509e9a771d0c2fcc2f04a040953538851d503ffd85121e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768446956220991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44031c7f-e317-4703-aab3-50572aae00c2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c,PodSandboxId:6a331d270c6f2e515692365fdf220ed7c2bd679ea0a7e9235f6a77988827201c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768445697494669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4n9qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611de0e-5480-4841-bfb5-68050fa068aa,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8,PodSandboxId:306632430a90e1623825395b3f2e25a8ada85715621156079531fcd81637da13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768437945514172,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8f9913-5
496-4fda-800e-c942e714f13e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768437889693500,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-
c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87,PodSandboxId:7c2e6768a141badbb09ec9f4e6a4923bf3120cd0def717d5b018008ffa5d64ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768433147183991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656000523d0c38f28776f138cadf7775,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60,PodSandboxId:92ed2606bf7babe56c413aaa4a3ebaca03052e6f0c12c046cbff2d1a11814de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768433119357113,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f07b65b5b4891ed9946624fdc67020,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46,PodSandboxId:81667bd6b6c80b2d134d3735979e4059be1a5c6b0671b2cb1665a5dc21af860c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768433093688159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1376704204a85444fb745b41bd56a466,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86,PodSandboxId:c52e756e9d40c87a3a35388b00547a911f122aa5a17fd6456f28ecc6c19441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768433126966873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e61dbeec6c5826180b0c3cc193efb
0,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d4aef56-6f23-4148-b979-ff4b710949a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.616489375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76e876ed-1fc3-4ae5-9467-1e4826b38680 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.616562143Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76e876ed-1fc3-4ae5-9467-1e4826b38680 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.617492037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a4e8b52-94ed-4a37-80cf-736cbd30f1d7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.617871985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769689617851128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a4e8b52-94ed-4a37-80cf-736cbd30f1d7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.618665644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60a6455f-7615-4afc-9852-c03f6bb609a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.618719478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60a6455f-7615-4afc-9852-c03f6bb609a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:49 default-k8s-diff-port-616827 crio[725]: time="2024-08-16 00:54:49.618906403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768468688541660,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f014e6cda883e1be849366a2984c3f9f80db9a87d96485de121db9c754b4dac7,PodSandboxId:69b55dbd9e253a720509e9a771d0c2fcc2f04a040953538851d503ffd85121e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723768446956220991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44031c7f-e317-4703-aab3-50572aae00c2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c,PodSandboxId:6a331d270c6f2e515692365fdf220ed7c2bd679ea0a7e9235f6a77988827201c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768445697494669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4n9qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611de0e-5480-4841-bfb5-68050fa068aa,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8,PodSandboxId:306632430a90e1623825395b3f2e25a8ada85715621156079531fcd81637da13,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723768437945514172,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99ds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8f9913-5
496-4fda-800e-c942e714f13e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae,PodSandboxId:8bcfb5671215928929e2387cc73cd57c239a11ea480dd6e86ef289517d07dbd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723768437889693500,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa790373-a4ce-4e37-ba86-
c1b0ae1074ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87,PodSandboxId:7c2e6768a141badbb09ec9f4e6a4923bf3120cd0def717d5b018008ffa5d64ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768433147183991,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 656000523d0c38f28776f138cadf7775,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60,PodSandboxId:92ed2606bf7babe56c413aaa4a3ebaca03052e6f0c12c046cbff2d1a11814de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768433119357113,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f07b65b5b4891ed9946624fdc67020,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46,PodSandboxId:81667bd6b6c80b2d134d3735979e4059be1a5c6b0671b2cb1665a5dc21af860c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768433093688159,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1376704204a85444fb745b41bd56a466,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86,PodSandboxId:c52e756e9d40c87a3a35388b00547a911f122aa5a17fd6456f28ecc6c19441b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768433126966873,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-616827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e61dbeec6c5826180b0c3cc193efb
0,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60a6455f-7615-4afc-9852-c03f6bb609a1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	31400c13619c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   8bcfb56712159       storage-provisioner
	f014e6cda883e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   69b55dbd9e253       busybox
	15fd3e395581c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   6a331d270c6f2       coredns-6f6b679f8f-4n9qq
	9821dfda7cc43       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      20 minutes ago      Running             kube-proxy                1                   306632430a90e       kube-proxy-f99ds
	d624b2f88ce3e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   8bcfb56712159       storage-provisioner
	d6e8ce8b4b577       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   7c2e6768a141b       etcd-default-k8s-diff-port-616827
	84380e27c5a9d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      20 minutes ago      Running             kube-controller-manager   1                   c52e756e9d40c       kube-controller-manager-default-k8s-diff-port-616827
	eb4c36b11d03e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      20 minutes ago      Running             kube-scheduler            1                   92ed2606bf7ba       kube-scheduler-default-k8s-diff-port-616827
	169a7e51493aa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      20 minutes ago      Running             kube-apiserver            1                   81667bd6b6c80       kube-apiserver-default-k8s-diff-port-616827
	
	
	==> coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47115 - 59184 "HINFO IN 7431896370060291427.1225471116469602556. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0115711s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-616827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-616827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=default-k8s-diff-port-616827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T00_25_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 00:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-616827
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:54:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 00:49:46 +0000   Fri, 16 Aug 2024 00:25:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 00:49:46 +0000   Fri, 16 Aug 2024 00:25:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 00:49:46 +0000   Fri, 16 Aug 2024 00:25:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 00:49:46 +0000   Fri, 16 Aug 2024 00:34:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.128
	  Hostname:    default-k8s-diff-port-616827
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f95a7a8d850e42cfb3645ab68eaceaa1
	  System UUID:                f95a7a8d-850e-42cf-b364-5ab68eaceaa1
	  Boot ID:                    86338a5c-695d-45f2-a39b-7f70b63f7a54
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-4n9qq                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-616827                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-616827             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-616827    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-f99ds                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-616827             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-sxqkg                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-616827 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-616827 event: Registered Node default-k8s-diff-port-616827 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-616827 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-616827 event: Registered Node default-k8s-diff-port-616827 in Controller
	
	
	==> dmesg <==
	[Aug16 00:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053524] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040087] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.943170] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.531616] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609508] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.348477] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.070912] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073416] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.203791] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.167481] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.338843] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +4.329915] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.069062] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.239626] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +5.619659] kauditd_printk_skb: 97 callbacks suppressed
	[Aug16 00:34] systemd-fstab-generator[1562]: Ignoring "noauto" option for root device
	[  +4.173727] kauditd_printk_skb: 64 callbacks suppressed
	[ +24.235315] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] <==
	{"level":"info","ts":"2024-08-16T00:33:55.444428Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.128:2379"}
	{"level":"info","ts":"2024-08-16T00:33:55.444430Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T00:34:10.896305Z","caller":"traceutil/trace.go:171","msg":"trace[1004334470] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"128.08971ms","start":"2024-08-16T00:34:10.768203Z","end":"2024-08-16T00:34:10.896293Z","steps":["trace[1004334470] 'process raft request'  (duration: 127.971219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T00:34:11.333158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.72327ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3261891839865383082 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" mod_revision:583 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" value_size:6543 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-16T00:34:11.333393Z","caller":"traceutil/trace.go:171","msg":"trace[1499638222] linearizableReadLoop","detail":"{readStateIndex:620; appliedIndex:619; }","duration":"343.931988ms","start":"2024-08-16T00:34:10.989447Z","end":"2024-08-16T00:34:11.333379Z","steps":["trace[1499638222] 'read index received'  (duration: 178.284911ms)","trace[1499638222] 'applied index is now lower than readState.Index'  (duration: 165.645474ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T00:34:11.333508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.050672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" ","response":"range_response_count:1 size:6645"}
	{"level":"info","ts":"2024-08-16T00:34:11.333538Z","caller":"traceutil/trace.go:171","msg":"trace[1006926640] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"411.636679ms","start":"2024-08-16T00:34:10.921888Z","end":"2024-08-16T00:34:11.333524Z","steps":["trace[1006926640] 'process raft request'  (duration: 245.892663ms)","trace[1006926640] 'compare'  (duration: 164.487514ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-16T00:34:11.333625Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T00:34:10.921867Z","time spent":"411.720179ms","remote":"127.0.0.1:50300","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6630,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" mod_revision:583 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" value_size:6543 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" > >"}
	{"level":"info","ts":"2024-08-16T00:34:11.333567Z","caller":"traceutil/trace.go:171","msg":"trace[1089771992] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827; range_end:; response_count:1; response_revision:584; }","duration":"344.115225ms","start":"2024-08-16T00:34:10.989443Z","end":"2024-08-16T00:34:11.333558Z","steps":["trace[1089771992] 'agreement among raft nodes before linearized reading'  (duration: 344.010429ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T00:34:11.333749Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T00:34:10.989401Z","time spent":"344.335576ms","remote":"127.0.0.1:50300","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":6669,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-616827\" "}
	{"level":"warn","ts":"2024-08-16T00:34:11.853655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.370284ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3261891839865383094 > lease_revoke:<id:2d4491589352f5fe>","response":"size:29"}
	{"level":"info","ts":"2024-08-16T00:43:55.473133Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":827}
	{"level":"info","ts":"2024-08-16T00:43:55.483808Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":827,"took":"10.210583ms","hash":560694949,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2740224,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-16T00:43:55.483901Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":560694949,"revision":827,"compact-revision":-1}
	{"level":"info","ts":"2024-08-16T00:48:55.483466Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1069}
	{"level":"info","ts":"2024-08-16T00:48:55.489982Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1069,"took":"5.21159ms","hash":1529552443,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-16T00:48:55.490053Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1529552443,"revision":1069,"compact-revision":827}
	{"level":"warn","ts":"2024-08-16T00:53:33.966166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.202763ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-16T00:53:33.966878Z","caller":"traceutil/trace.go:171","msg":"trace[84420116] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1538; }","duration":"116.038656ms","start":"2024-08-16T00:53:33.850814Z","end":"2024-08-16T00:53:33.966853Z","steps":["trace[84420116] 'range keys from in-memory index tree'  (duration: 115.184358ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-16T00:53:55.494244Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1312}
	{"level":"info","ts":"2024-08-16T00:53:55.498501Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1312,"took":"3.598066ms","hash":448273090,"current-db-size-bytes":2740224,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-16T00:53:55.498591Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":448273090,"revision":1312,"compact-revision":1069}
	{"level":"info","ts":"2024-08-16T00:54:28.255628Z","caller":"traceutil/trace.go:171","msg":"trace[2086932318] transaction","detail":"{read_only:false; response_revision:1582; number_of_response:1; }","duration":"146.745395ms","start":"2024-08-16T00:54:28.108856Z","end":"2024-08-16T00:54:28.255601Z","steps":["trace[2086932318] 'process raft request'  (duration: 146.430093ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T00:54:28.541708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.366349ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3261891839865391127 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:2d4491589aec0816>","response":"size:41"}
	{"level":"info","ts":"2024-08-16T00:54:28.661330Z","caller":"traceutil/trace.go:171","msg":"trace[1224084291] transaction","detail":"{read_only:false; response_revision:1583; number_of_response:1; }","duration":"118.60052ms","start":"2024-08-16T00:54:28.542712Z","end":"2024-08-16T00:54:28.661312Z","steps":["trace[1224084291] 'process raft request'  (duration: 116.604502ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:54:49 up 21 min,  0 users,  load average: 0.10, 0.14, 0.10
	Linux default-k8s-diff-port-616827 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] <==
	I0816 00:49:57.853735       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:49:57.853781       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:51:57.854821       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:51:57.854944       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 00:51:57.854870       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:51:57.855232       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 00:51:57.856762       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:51:57.856858       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:53:56.854016       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:53:56.854833       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 00:53:57.856745       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:53:57.856890       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0816 00:53:57.857049       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:53:57.857207       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 00:53:57.858053       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:53:57.859271       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] <==
	E0816 00:49:30.438941       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:49:31.024926       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:49:46.596984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-616827"
	E0816 00:50:00.445372       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:50:01.031999       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:50:16.478769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.290845ms"
	I0816 00:50:29.475818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="264.171µs"
	E0816 00:50:30.451815       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:50:31.039405       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:51:00.458378       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:51:01.047290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:51:30.466051       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:51:31.057028       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:52:00.473053       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:52:01.064657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:52:30.481865       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:52:31.071705       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:53:00.488514       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:53:01.080176       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:53:30.495189       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:53:31.089521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:54:00.501693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:54:01.097628       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:54:30.507270       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:54:31.104873       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 00:33:58.193675       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 00:33:58.205818       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.128"]
	E0816 00:33:58.205892       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 00:33:58.241695       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 00:33:58.241774       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 00:33:58.241805       1 server_linux.go:169] "Using iptables Proxier"
	I0816 00:33:58.244716       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 00:33:58.244984       1 server.go:483] "Version info" version="v1.31.0"
	I0816 00:33:58.245011       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:33:58.246550       1 config.go:197] "Starting service config controller"
	I0816 00:33:58.246590       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 00:33:58.246612       1 config.go:104] "Starting endpoint slice config controller"
	I0816 00:33:58.246615       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 00:33:58.247592       1 config.go:326] "Starting node config controller"
	I0816 00:33:58.247673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 00:33:58.347018       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 00:33:58.347125       1 shared_informer.go:320] Caches are synced for service config
	I0816 00:33:58.348454       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] <==
	I0816 00:33:54.052741       1 serving.go:386] Generated self-signed cert in-memory
	W0816 00:33:56.776593       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 00:33:56.776723       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 00:33:56.776816       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 00:33:56.776842       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 00:33:56.827441       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 00:33:56.827490       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:33:56.835559       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 00:33:56.835664       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 00:33:56.835692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 00:33:56.835816       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 00:33:56.937583       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 00:53:42 default-k8s-diff-port-616827 kubelet[934]: E0816 00:53:42.460409     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:53:42 default-k8s-diff-port-616827 kubelet[934]: E0816 00:53:42.774863     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769622774534321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:42 default-k8s-diff-port-616827 kubelet[934]: E0816 00:53:42.774906     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769622774534321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:52 default-k8s-diff-port-616827 kubelet[934]: E0816 00:53:52.503862     934 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 00:53:52 default-k8s-diff-port-616827 kubelet[934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 00:53:52 default-k8s-diff-port-616827 kubelet[934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 00:53:52 default-k8s-diff-port-616827 kubelet[934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 00:53:52 default-k8s-diff-port-616827 kubelet[934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 00:53:52 default-k8s-diff-port-616827 kubelet[934]: E0816 00:53:52.777968     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769632777292868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:52 default-k8s-diff-port-616827 kubelet[934]: E0816 00:53:52.777996     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769632777292868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:54 default-k8s-diff-port-616827 kubelet[934]: E0816 00:53:54.459208     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:54:02 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:02.779037     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769642778736166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:02 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:02.779162     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769642778736166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:05 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:05.461324     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:54:12 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:12.780480     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769652780150878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:12 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:12.781247     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769652780150878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:17 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:17.459197     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:54:22 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:22.783315     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769662782579240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:22 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:22.783925     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769662782579240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:31 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:31.459219     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	Aug 16 00:54:32 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:32.785237     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769672784720705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:32 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:32.785767     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769672784720705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:42 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:42.787311     934 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769682786623690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:42 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:42.787816     934 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769682786623690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:45 default-k8s-diff-port-616827 kubelet[934]: E0816 00:54:45.459745     934 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sxqkg" podUID="6443b455-56f9-4532-8156-847298f5e9eb"
	
	
	==> storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] <==
	I0816 00:34:28.788543       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 00:34:28.798932       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 00:34:28.799199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 00:34:46.197518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 00:34:46.197796       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-616827_53b2e7df-3a1c-4ab3-8ea1-e7f4c14435eb!
	I0816 00:34:46.199233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbab41f0-f88f-4bae-ac33-357844cf541c", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-616827_53b2e7df-3a1c-4ab3-8ea1-e7f4c14435eb became leader
	I0816 00:34:46.298888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-616827_53b2e7df-3a1c-4ab3-8ea1-e7f4c14435eb!
	
	
	==> storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] <==
	I0816 00:33:58.101946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 00:34:28.104915       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-616827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-sxqkg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-616827 describe pod metrics-server-6867b74b74-sxqkg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-616827 describe pod metrics-server-6867b74b74-sxqkg: exit status 1 (59.359507ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-sxqkg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-616827 describe pod metrics-server-6867b74b74-sxqkg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (442.69s)
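The failure above comes down to the dashboard addon pods never appearing after the stop/start cycle, with the only non-running pod left being the metrics-server stuck in ImagePullBackOff on the fake.domain registry. As a minimal sketch only, assuming the default-k8s-diff-port-616827 context from these logs is still reachable and that this variant waits on the same k8s-app=kubernetes-dashboard selector as the no-preload run below, the wait could be reproduced by hand with:

    kubectl --context default-k8s-diff-port-616827 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context default-k8s-diff-port-616827 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m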

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (310.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-819398 -n no-preload-819398
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-16 00:54:04.695675151 +0000 UTC m=+6519.276254141
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-819398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-819398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.094µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-819398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
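The assertion at start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment info contains the overridden image string. A rough sketch of the same check made directly with kubectl, assuming the no-preload-819398 context is reachable (the context-deadline errors above suggest it was not when the test ran):

    kubectl --context no-preload-819398 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'
    # expected to contain registry.k8s.io/echoserver:1.4, the image set via
    # "addons enable dashboard ... --images=MetricsScraper=registry.k8s.io/echoserver:1.4" in the Audit table below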
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-819398 -n no-preload-819398
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-819398 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-819398 logs -n 25: (1.285483692s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p bridge-697641                                       | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-067133 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | disable-driver-mounts-067133                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:25 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-819398             | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC | 16 Aug 24 00:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-758469            | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-616827  | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098619        | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-819398                  | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-758469                 | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-616827       | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098619             | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:52 UTC | 16 Aug 24 00:53 UTC |
	| start   | -p newest-cni-504758 --memory=2200 --alsologtostderr   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-504758             | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-504758                                   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-504758                  | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC | 16 Aug 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-504758 --memory=2200 --alsologtostderr   | newest-cni-504758            | jenkins | v1.33.1 | 16 Aug 24 00:53 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:53:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 00:53:59.873204   85810 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:53:59.873418   85810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:53:59.873432   85810 out.go:358] Setting ErrFile to fd 2...
	I0816 00:53:59.873480   85810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:53:59.874000   85810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:53:59.875082   85810 out.go:352] Setting JSON to false
	I0816 00:53:59.876025   85810 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9340,"bootTime":1723760300,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:53:59.876086   85810 start.go:139] virtualization: kvm guest
	I0816 00:53:59.877808   85810 out.go:177] * [newest-cni-504758] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:53:59.879412   85810 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:53:59.879465   85810 notify.go:220] Checking for updates...
	I0816 00:53:59.881689   85810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:53:59.882889   85810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:53:59.883990   85810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:53:59.885260   85810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:53:59.886614   85810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:53:59.888418   85810 config.go:182] Loaded profile config "newest-cni-504758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:53:59.889045   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:53:59.889129   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:53:59.903823   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42987
	I0816 00:53:59.904331   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:53:59.904868   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:53:59.904885   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:53:59.905186   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:53:59.905427   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:53:59.905682   85810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:53:59.906014   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:53:59.906063   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:53:59.920597   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0816 00:53:59.921028   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:53:59.921558   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:53:59.921585   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:53:59.921878   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:53:59.922101   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:53:59.960476   85810 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:53:59.961667   85810 start.go:297] selected driver: kvm2
	I0816 00:53:59.961693   85810 start.go:901] validating driver "kvm2" against &{Name:newest-cni-504758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-504758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:53:59.961862   85810 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:53:59.962612   85810 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:53:59.962691   85810 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:53:59.979820   85810 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:53:59.980325   85810 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 00:53:59.980411   85810 cni.go:84] Creating CNI manager for ""
	I0816 00:53:59.980428   85810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:53:59.980498   85810 start.go:340] cluster config:
	{Name:newest-cni-504758 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-504758 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.148 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:53:59.980658   85810 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:53:59.982679   85810 out.go:177] * Starting "newest-cni-504758" primary control-plane node in "newest-cni-504758" cluster
	I0816 00:53:59.983985   85810 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:53:59.984028   85810 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:53:59.984040   85810 cache.go:56] Caching tarball of preloaded images
	I0816 00:53:59.984135   85810 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:53:59.984149   85810 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0816 00:53:59.984291   85810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/newest-cni-504758/config.json ...
	I0816 00:53:59.984554   85810 start.go:360] acquireMachinesLock for newest-cni-504758: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:53:59.984616   85810 start.go:364] duration metric: took 33.654µs to acquireMachinesLock for "newest-cni-504758"
	I0816 00:53:59.984636   85810 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:53:59.984645   85810 fix.go:54] fixHost starting: 
	I0816 00:53:59.985031   85810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:53:59.985076   85810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:53:59.999635   85810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35639
	I0816 00:54:00.000153   85810 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:54:00.000802   85810 main.go:141] libmachine: Using API Version  1
	I0816 00:54:00.000828   85810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:54:00.001196   85810 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:54:00.001416   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	I0816 00:54:00.001618   85810 main.go:141] libmachine: (newest-cni-504758) Calling .GetState
	I0816 00:54:00.003393   85810 fix.go:112] recreateIfNeeded on newest-cni-504758: state=Stopped err=<nil>
	I0816 00:54:00.003432   85810 main.go:141] libmachine: (newest-cni-504758) Calling .DriverName
	W0816 00:54:00.003592   85810 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:54:00.005439   85810 out.go:177] * Restarting existing kvm2 VM for "newest-cni-504758" ...
	I0816 00:54:00.006544   85810 main.go:141] libmachine: (newest-cni-504758) Calling .Start
	I0816 00:54:00.006728   85810 main.go:141] libmachine: (newest-cni-504758) Ensuring networks are active...
	I0816 00:54:00.007518   85810 main.go:141] libmachine: (newest-cni-504758) Ensuring network default is active
	I0816 00:54:00.007916   85810 main.go:141] libmachine: (newest-cni-504758) Ensuring network mk-newest-cni-504758 is active
	I0816 00:54:00.008361   85810 main.go:141] libmachine: (newest-cni-504758) Getting domain xml...
	I0816 00:54:00.009155   85810 main.go:141] libmachine: (newest-cni-504758) Creating domain...
	I0816 00:54:01.244893   85810 main.go:141] libmachine: (newest-cni-504758) Waiting to get IP...
	I0816 00:54:01.245724   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:01.246226   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:01.246290   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:01.246206   85845 retry.go:31] will retry after 269.680632ms: waiting for machine to come up
	I0816 00:54:01.517881   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:01.518403   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:01.518423   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:01.518361   85845 retry.go:31] will retry after 274.232355ms: waiting for machine to come up
	I0816 00:54:01.793786   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:01.794319   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:01.794348   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:01.794273   85845 retry.go:31] will retry after 416.170581ms: waiting for machine to come up
	I0816 00:54:02.212494   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:02.212959   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:02.213002   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:02.212904   85845 retry.go:31] will retry after 465.478219ms: waiting for machine to come up
	I0816 00:54:02.679458   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:02.679920   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:02.679955   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:02.679889   85845 retry.go:31] will retry after 748.437183ms: waiting for machine to come up
	I0816 00:54:03.429734   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:03.430251   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:03.430274   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:03.430199   85845 retry.go:31] will retry after 895.520052ms: waiting for machine to come up
	I0816 00:54:04.326808   85810 main.go:141] libmachine: (newest-cni-504758) DBG | domain newest-cni-504758 has defined MAC address 52:54:00:15:1d:34 in network mk-newest-cni-504758
	I0816 00:54:04.327193   85810 main.go:141] libmachine: (newest-cni-504758) DBG | unable to find current IP address of domain newest-cni-504758 in network mk-newest-cni-504758
	I0816 00:54:04.327219   85810 main.go:141] libmachine: (newest-cni-504758) DBG | I0816 00:54:04.327161   85845 retry.go:31] will retry after 754.604111ms: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.317959840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769645317924513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bd8d20a-fdae-41ce-b3dc-a6f0a6cb6f1a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.318904138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ba80b58-4db1-4b18-afc2-82623fdbf1f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.318995270Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ba80b58-4db1-4b18-afc2-82623fdbf1f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.319317825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6b16872e7c9a9093f2db5519f1a81fc1978dac654132e59ba7f2cce41e8a3f7,PodSandboxId:5d189d1e30f4c889864fa8d722d32f71349ca7e9216ab3ef1b3f2ac90f9b1698,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768784823389322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b813a00-5eeb-468e-8591-e3d83ddb1556,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7966e96977b9c6f04b0f3c8d86f9e867c59e5aa292a88148c12dc235862e8648,PodSandboxId:7ef517f1733e4b675d9de404f63f0d5ed642f3566154dce7f5175384cf626bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784269560736,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wqr8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a3f3eb-5b2c-4bca-a1c6-b33beca82a09,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6785d05a6b876a748d371b942f43af11336c7411d63c0145cb43aed85e0aa51d,PodSandboxId:7faedbe535f7ba3d9aa5791920129ae1f4dce33577c5a00cefa5d97e6c316cd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784001697037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5gdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
2bb7c6-b9f2-44b2-bff1-e7c5f163c208,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5533173575d6d28dd135acfbade9b483d69062563f9c2f76206b680a3719468,PodSandboxId:300eb6af029dd8f572627fabd88ee3f2617fffdda32f6ec7f326a00e85e4eeeb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723768783477900167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nl7g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4697f7b9-3f79-451d-927e-15eb68e88eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45de6162ae2e10e3300ffe32e336e3ab34806d97034d3f35175aae5aa80bfe5e,PodSandboxId:f49268e0f7d8c9800128a7855b6a3cf120983757de5c7ad2314282da4b8b9559,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768772300131134,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567ee3d9ca9b16f959e11b063db2324,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6ec95b8ba0e66e46bfd672285d20f04d88090cebcc0a304809e2ad5c4db1b,PodSandboxId:d1c9dd5db18ce5cf978534a308e54369c13ffd5b6ffec01c10549298d456c46d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768772287724863,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874cabf22af8702efdca4d9dd5ad535a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5abeabb7b47437f57c51947a7ac69eac20d4efbeee808eede61bec4d9fe0256,PodSandboxId:aeceefc585992ac479585092f9c98ffd57752d9644b5e8f6689975c675a79167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768772273391465,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac513da8f7badd477e959cdb64321d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12ec55a551e3f5f2f29071296fa47f7b8950e2cbfe9f6a1f3cefb69be76ea07,PodSandboxId:77bff51c2f0926049ae59fc52ec7a5046a459d0e899288505478cfe8017363ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768772222494974,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d261dba4ec9c924355d4f7d3f4b9e4a866f6399d07e8cee1b0c5a7ddb3384a97,PodSandboxId:2002911dadf2841da6d0ad5d91504520b92c59428ce5f1a3242e50bf610707cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723768483610400780,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ba80b58-4db1-4b18-afc2-82623fdbf1f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.363712582Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1e2ccf2-aa78-4da0-b4d8-dc78486f2609 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.363812391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1e2ccf2-aa78-4da0-b4d8-dc78486f2609 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.365335021Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38ec94a9-cd36-45b3-bfc9-a0bd9cd68083 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.365790920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769645365764282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38ec94a9-cd36-45b3-bfc9-a0bd9cd68083 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.366499685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3ca418a-f7a5-4ce9-85e5-2943ff8712b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.366573732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3ca418a-f7a5-4ce9-85e5-2943ff8712b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.366872285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6b16872e7c9a9093f2db5519f1a81fc1978dac654132e59ba7f2cce41e8a3f7,PodSandboxId:5d189d1e30f4c889864fa8d722d32f71349ca7e9216ab3ef1b3f2ac90f9b1698,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768784823389322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b813a00-5eeb-468e-8591-e3d83ddb1556,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7966e96977b9c6f04b0f3c8d86f9e867c59e5aa292a88148c12dc235862e8648,PodSandboxId:7ef517f1733e4b675d9de404f63f0d5ed642f3566154dce7f5175384cf626bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784269560736,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wqr8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a3f3eb-5b2c-4bca-a1c6-b33beca82a09,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6785d05a6b876a748d371b942f43af11336c7411d63c0145cb43aed85e0aa51d,PodSandboxId:7faedbe535f7ba3d9aa5791920129ae1f4dce33577c5a00cefa5d97e6c316cd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784001697037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5gdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
2bb7c6-b9f2-44b2-bff1-e7c5f163c208,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5533173575d6d28dd135acfbade9b483d69062563f9c2f76206b680a3719468,PodSandboxId:300eb6af029dd8f572627fabd88ee3f2617fffdda32f6ec7f326a00e85e4eeeb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723768783477900167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nl7g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4697f7b9-3f79-451d-927e-15eb68e88eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45de6162ae2e10e3300ffe32e336e3ab34806d97034d3f35175aae5aa80bfe5e,PodSandboxId:f49268e0f7d8c9800128a7855b6a3cf120983757de5c7ad2314282da4b8b9559,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768772300131134,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567ee3d9ca9b16f959e11b063db2324,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6ec95b8ba0e66e46bfd672285d20f04d88090cebcc0a304809e2ad5c4db1b,PodSandboxId:d1c9dd5db18ce5cf978534a308e54369c13ffd5b6ffec01c10549298d456c46d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768772287724863,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874cabf22af8702efdca4d9dd5ad535a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5abeabb7b47437f57c51947a7ac69eac20d4efbeee808eede61bec4d9fe0256,PodSandboxId:aeceefc585992ac479585092f9c98ffd57752d9644b5e8f6689975c675a79167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768772273391465,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac513da8f7badd477e959cdb64321d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12ec55a551e3f5f2f29071296fa47f7b8950e2cbfe9f6a1f3cefb69be76ea07,PodSandboxId:77bff51c2f0926049ae59fc52ec7a5046a459d0e899288505478cfe8017363ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768772222494974,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d261dba4ec9c924355d4f7d3f4b9e4a866f6399d07e8cee1b0c5a7ddb3384a97,PodSandboxId:2002911dadf2841da6d0ad5d91504520b92c59428ce5f1a3242e50bf610707cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723768483610400780,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3ca418a-f7a5-4ce9-85e5-2943ff8712b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.411287412Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0db5d951-13ad-47a7-b406-b52acb6c6fe9 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.411368716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0db5d951-13ad-47a7-b406-b52acb6c6fe9 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.412842954Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83c68603-5937-4f4e-9027-2411287d30a6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.413522374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769645413498188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83c68603-5937-4f4e-9027-2411287d30a6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.414238088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41ab3f4c-e9ee-479a-b0db-418f425c855f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.414290485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41ab3f4c-e9ee-479a-b0db-418f425c855f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.414485622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6b16872e7c9a9093f2db5519f1a81fc1978dac654132e59ba7f2cce41e8a3f7,PodSandboxId:5d189d1e30f4c889864fa8d722d32f71349ca7e9216ab3ef1b3f2ac90f9b1698,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768784823389322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b813a00-5eeb-468e-8591-e3d83ddb1556,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7966e96977b9c6f04b0f3c8d86f9e867c59e5aa292a88148c12dc235862e8648,PodSandboxId:7ef517f1733e4b675d9de404f63f0d5ed642f3566154dce7f5175384cf626bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784269560736,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wqr8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a3f3eb-5b2c-4bca-a1c6-b33beca82a09,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6785d05a6b876a748d371b942f43af11336c7411d63c0145cb43aed85e0aa51d,PodSandboxId:7faedbe535f7ba3d9aa5791920129ae1f4dce33577c5a00cefa5d97e6c316cd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784001697037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5gdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
2bb7c6-b9f2-44b2-bff1-e7c5f163c208,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5533173575d6d28dd135acfbade9b483d69062563f9c2f76206b680a3719468,PodSandboxId:300eb6af029dd8f572627fabd88ee3f2617fffdda32f6ec7f326a00e85e4eeeb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723768783477900167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nl7g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4697f7b9-3f79-451d-927e-15eb68e88eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45de6162ae2e10e3300ffe32e336e3ab34806d97034d3f35175aae5aa80bfe5e,PodSandboxId:f49268e0f7d8c9800128a7855b6a3cf120983757de5c7ad2314282da4b8b9559,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768772300131134,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567ee3d9ca9b16f959e11b063db2324,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6ec95b8ba0e66e46bfd672285d20f04d88090cebcc0a304809e2ad5c4db1b,PodSandboxId:d1c9dd5db18ce5cf978534a308e54369c13ffd5b6ffec01c10549298d456c46d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768772287724863,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874cabf22af8702efdca4d9dd5ad535a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5abeabb7b47437f57c51947a7ac69eac20d4efbeee808eede61bec4d9fe0256,PodSandboxId:aeceefc585992ac479585092f9c98ffd57752d9644b5e8f6689975c675a79167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768772273391465,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac513da8f7badd477e959cdb64321d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12ec55a551e3f5f2f29071296fa47f7b8950e2cbfe9f6a1f3cefb69be76ea07,PodSandboxId:77bff51c2f0926049ae59fc52ec7a5046a459d0e899288505478cfe8017363ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768772222494974,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d261dba4ec9c924355d4f7d3f4b9e4a866f6399d07e8cee1b0c5a7ddb3384a97,PodSandboxId:2002911dadf2841da6d0ad5d91504520b92c59428ce5f1a3242e50bf610707cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723768483610400780,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41ab3f4c-e9ee-479a-b0db-418f425c855f name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.455937744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfc1c8df-23a4-48b3-bfba-b745077b98c2 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.456015604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfc1c8df-23a4-48b3-bfba-b745077b98c2 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.463740853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f48b7478-3037-4a72-90a4-a7d474550591 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.464235489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769645464204221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f48b7478-3037-4a72-90a4-a7d474550591 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.464778080Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b789156d-e6dc-4de9-9c14-cfe3f59263b2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.464850099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b789156d-e6dc-4de9-9c14-cfe3f59263b2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:54:05 no-preload-819398 crio[729]: time="2024-08-16 00:54:05.465129369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6b16872e7c9a9093f2db5519f1a81fc1978dac654132e59ba7f2cce41e8a3f7,PodSandboxId:5d189d1e30f4c889864fa8d722d32f71349ca7e9216ab3ef1b3f2ac90f9b1698,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723768784823389322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b813a00-5eeb-468e-8591-e3d83ddb1556,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7966e96977b9c6f04b0f3c8d86f9e867c59e5aa292a88148c12dc235862e8648,PodSandboxId:7ef517f1733e4b675d9de404f63f0d5ed642f3566154dce7f5175384cf626bf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784269560736,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wqr8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a3f3eb-5b2c-4bca-a1c6-b33beca82a09,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6785d05a6b876a748d371b942f43af11336c7411d63c0145cb43aed85e0aa51d,PodSandboxId:7faedbe535f7ba3d9aa5791920129ae1f4dce33577c5a00cefa5d97e6c316cd0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723768784001697037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5gdv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e
2bb7c6-b9f2-44b2-bff1-e7c5f163c208,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5533173575d6d28dd135acfbade9b483d69062563f9c2f76206b680a3719468,PodSandboxId:300eb6af029dd8f572627fabd88ee3f2617fffdda32f6ec7f326a00e85e4eeeb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723768783477900167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nl7g6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4697f7b9-3f79-451d-927e-15eb68e88eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45de6162ae2e10e3300ffe32e336e3ab34806d97034d3f35175aae5aa80bfe5e,PodSandboxId:f49268e0f7d8c9800128a7855b6a3cf120983757de5c7ad2314282da4b8b9559,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723768772300131134,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e567ee3d9ca9b16f959e11b063db2324,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef6ec95b8ba0e66e46bfd672285d20f04d88090cebcc0a304809e2ad5c4db1b,PodSandboxId:d1c9dd5db18ce5cf978534a308e54369c13ffd5b6ffec01c10549298d456c46d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723768772287724863,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874cabf22af8702efdca4d9dd5ad535a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5abeabb7b47437f57c51947a7ac69eac20d4efbeee808eede61bec4d9fe0256,PodSandboxId:aeceefc585992ac479585092f9c98ffd57752d9644b5e8f6689975c675a79167,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723768772273391465,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6ac513da8f7badd477e959cdb64321d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12ec55a551e3f5f2f29071296fa47f7b8950e2cbfe9f6a1f3cefb69be76ea07,PodSandboxId:77bff51c2f0926049ae59fc52ec7a5046a459d0e899288505478cfe8017363ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723768772222494974,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d261dba4ec9c924355d4f7d3f4b9e4a866f6399d07e8cee1b0c5a7ddb3384a97,PodSandboxId:2002911dadf2841da6d0ad5d91504520b92c59428ce5f1a3242e50bf610707cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723768483610400780,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-819398,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b99ed13c1336e45ed6ad79a67d09f849,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b789156d-e6dc-4de9-9c14-cfe3f59263b2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f6b16872e7c9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   5d189d1e30f4c       storage-provisioner
	7966e96977b9c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   7ef517f1733e4       coredns-6f6b679f8f-wqr8r
	6785d05a6b876       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   7faedbe535f7b       coredns-6f6b679f8f-5gdv9
	f5533173575d6       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   14 minutes ago      Running             kube-proxy                0                   300eb6af029dd       kube-proxy-nl7g6
	45de6162ae2e1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   f49268e0f7d8c       etcd-no-preload-819398
	8ef6ec95b8ba0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   14 minutes ago      Running             kube-scheduler            2                   d1c9dd5db18ce       kube-scheduler-no-preload-819398
	f5abeabb7b474       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   14 minutes ago      Running             kube-controller-manager   2                   aeceefc585992       kube-controller-manager-no-preload-819398
	a12ec55a551e3       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Running             kube-apiserver            2                   77bff51c2f092       kube-apiserver-no-preload-819398
	d261dba4ec9c9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 minutes ago      Exited              kube-apiserver            1                   2002911dadf28       kube-apiserver-no-preload-819398
	
	
	==> coredns [6785d05a6b876a748d371b942f43af11336c7411d63c0145cb43aed85e0aa51d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [7966e96977b9c6f04b0f3c8d86f9e867c59e5aa292a88148c12dc235862e8648] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-819398
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-819398
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774
	                    minikube.k8s.io/name=no-preload-819398
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T00_39_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 00:39:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-819398
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 00:54:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 00:49:59 +0000   Fri, 16 Aug 2024 00:39:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 00:49:59 +0000   Fri, 16 Aug 2024 00:39:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 00:49:59 +0000   Fri, 16 Aug 2024 00:39:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 00:49:59 +0000   Fri, 16 Aug 2024 00:39:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.15
	  Hostname:    no-preload-819398
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bf6cdb904364dc486e6cbe723db5d1c
	  System UUID:                5bf6cdb9-0436-4dc4-86e6-cbe723db5d1c
	  Boot ID:                    44c8d6dd-79df-4822-926d-e4e2fbe958e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5gdv9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-wqr8r                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-819398                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-819398             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-819398    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-nl7g6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-819398             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-dz5h4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-819398 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-819398 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-819398 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-819398 event: Registered Node no-preload-819398 in Controller
	
	
	==> dmesg <==
	[  +0.060975] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043298] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.190561] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.605423] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.581074] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.178451] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.061494] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065291] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.162833] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.150679] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.288770] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[ +16.023834] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.058376] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.149325] systemd-fstab-generator[1431]: Ignoring "noauto" option for root device
	[  +4.162014] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.256111] kauditd_printk_skb: 86 callbacks suppressed
	[Aug16 00:39] systemd-fstab-generator[3078]: Ignoring "noauto" option for root device
	[  +0.065413] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.503380] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +0.082001] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.322754] systemd-fstab-generator[3531]: Ignoring "noauto" option for root device
	[  +0.119334] kauditd_printk_skb: 12 callbacks suppressed
	[Aug16 00:41] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [45de6162ae2e10e3300ffe32e336e3ab34806d97034d3f35175aae5aa80bfe5e] <==
	{"level":"info","ts":"2024-08-16T00:39:32.702988Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.15:2380"}
	{"level":"info","ts":"2024-08-16T00:39:32.703596Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.15:2380"}
	{"level":"info","ts":"2024-08-16T00:39:33.424196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T00:39:33.424309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T00:39:33.424365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 received MsgPreVoteResp from 4e5e32f94c376694 at term 1"}
	{"level":"info","ts":"2024-08-16T00:39:33.424401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T00:39:33.424426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 received MsgVoteResp from 4e5e32f94c376694 at term 2"}
	{"level":"info","ts":"2024-08-16T00:39:33.424452Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e5e32f94c376694 became leader at term 2"}
	{"level":"info","ts":"2024-08-16T00:39:33.424479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e5e32f94c376694 elected leader 4e5e32f94c376694 at term 2"}
	{"level":"info","ts":"2024-08-16T00:39:33.428375Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:39:33.430481Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4e5e32f94c376694","local-member-attributes":"{Name:no-preload-819398 ClientURLs:[https://192.168.61.15:2379]}","request-path":"/0/members/4e5e32f94c376694/attributes","cluster-id":"cec272b56a0b2be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T00:39:33.430714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:39:33.430904Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cec272b56a0b2be","local-member-id":"4e5e32f94c376694","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:39:33.433156Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:39:33.433230Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T00:39:33.433370Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T00:39:33.436544Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:39:33.439838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T00:39:33.459199Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T00:39:33.459290Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T00:39:33.460735Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T00:39:33.482856Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.15:2379"}
	{"level":"info","ts":"2024-08-16T00:49:33.475205Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-16T00:49:33.485229Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":686,"took":"9.272361ms","hash":968666005,"current-db-size-bytes":2203648,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2203648,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-16T00:49:33.485363Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":968666005,"revision":686,"compact-revision":-1}
	
	
	==> kernel <==
	 00:54:05 up 19 min,  0 users,  load average: 0.34, 0.23, 0.18
	Linux no-preload-819398 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a12ec55a551e3f5f2f29071296fa47f7b8950e2cbfe9f6a1f3cefb69be76ea07] <==
	W0816 00:49:36.081166       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:49:36.081380       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0816 00:49:36.082376       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:49:36.082476       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:50:36.083167       1 handler_proxy.go:99] no RequestInfo found in the context
	W0816 00:50:36.083535       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:50:36.083792       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0816 00:50:36.083646       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 00:50:36.085005       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:50:36.085198       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0816 00:52:36.085372       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:52:36.085550       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0816 00:52:36.085617       1 handler_proxy.go:99] no RequestInfo found in the context
	E0816 00:52:36.085650       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0816 00:52:36.086785       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 00:52:36.086846       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d261dba4ec9c924355d4f7d3f4b9e4a866f6399d07e8cee1b0c5a7ddb3384a97] <==
	W0816 00:39:24.102891       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.117710       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.200672       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.211779       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.221428       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.271181       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.311341       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.379348       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.412328       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.491760       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.524198       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.545686       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.680026       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:24.814217       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:27.959167       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:28.162616       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:28.685492       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:28.851692       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:28.923132       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.016245       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.041846       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.060767       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.075294       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.144697       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 00:39:29.156243       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f5abeabb7b47437f57c51947a7ac69eac20d4efbeee808eede61bec4d9fe0256] <==
	E0816 00:48:42.140035       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:48:42.645502       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:49:12.146896       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:49:12.653414       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:49:42.153923       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:49:42.663362       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:49:59.441226       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-819398"
	E0816 00:50:12.161691       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:50:12.677387       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:50:42.169047       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:50:42.685848       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0816 00:50:54.550395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="159.994µs"
	I0816 00:51:06.543125       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="106.507µs"
	E0816 00:51:12.175232       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:51:12.693307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:51:42.181618       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:51:42.702535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:52:12.188901       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:52:12.722194       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:52:42.196373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:52:42.730857       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:53:12.204194       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:53:12.740017       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0816 00:53:42.211531       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0816 00:53:42.749831       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f5533173575d6d28dd135acfbade9b483d69062563f9c2f76206b680a3719468] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 00:39:43.883727       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 00:39:43.895937       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.15"]
	E0816 00:39:43.896000       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 00:39:43.984283       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 00:39:43.985813       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 00:39:43.985894       1 server_linux.go:169] "Using iptables Proxier"
	I0816 00:39:44.000580       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 00:39:44.000808       1 server.go:483] "Version info" version="v1.31.0"
	I0816 00:39:44.000819       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 00:39:44.007960       1 config.go:197] "Starting service config controller"
	I0816 00:39:44.008133       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 00:39:44.008235       1 config.go:104] "Starting endpoint slice config controller"
	I0816 00:39:44.008263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 00:39:44.012553       1 config.go:326] "Starting node config controller"
	I0816 00:39:44.012630       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 00:39:44.113401       1 shared_informer.go:320] Caches are synced for node config
	I0816 00:39:44.113460       1 shared_informer.go:320] Caches are synced for service config
	I0816 00:39:44.113511       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8ef6ec95b8ba0e66e46bfd672285d20f04d88090cebcc0a304809e2ad5c4db1b] <==
	W0816 00:39:35.125008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 00:39:35.125036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:35.125158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 00:39:35.125188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:35.125232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 00:39:35.125243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:35.125281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 00:39:35.125309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:35.928288       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 00:39:35.928354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.017805       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 00:39:36.017862       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0816 00:39:36.091805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 00:39:36.091859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.146604       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 00:39:36.146653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.146713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 00:39:36.146724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.346030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 00:39:36.346129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.367204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 00:39:36.367255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 00:39:36.384596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 00:39:36.384648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0816 00:39:37.918628       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 00:52:56 no-preload-819398 kubelet[3408]: E0816 00:52:56.527383    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:52:57 no-preload-819398 kubelet[3408]: E0816 00:52:57.786382    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769577785244490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:52:57 no-preload-819398 kubelet[3408]: E0816 00:52:57.786435    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769577785244490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:07 no-preload-819398 kubelet[3408]: E0816 00:53:07.788785    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769587788269627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:07 no-preload-819398 kubelet[3408]: E0816 00:53:07.788831    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769587788269627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:08 no-preload-819398 kubelet[3408]: E0816 00:53:08.527281    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:53:17 no-preload-819398 kubelet[3408]: E0816 00:53:17.790447    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769597790168778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:17 no-preload-819398 kubelet[3408]: E0816 00:53:17.790496    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769597790168778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:23 no-preload-819398 kubelet[3408]: E0816 00:53:23.526891    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:53:27 no-preload-819398 kubelet[3408]: E0816 00:53:27.791535    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769607791346788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:27 no-preload-819398 kubelet[3408]: E0816 00:53:27.791556    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769607791346788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:37 no-preload-819398 kubelet[3408]: E0816 00:53:37.550562    3408 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 00:53:37 no-preload-819398 kubelet[3408]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 00:53:37 no-preload-819398 kubelet[3408]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 00:53:37 no-preload-819398 kubelet[3408]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 00:53:37 no-preload-819398 kubelet[3408]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 00:53:37 no-preload-819398 kubelet[3408]: E0816 00:53:37.793417    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769617793114301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:37 no-preload-819398 kubelet[3408]: E0816 00:53:37.793442    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769617793114301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:38 no-preload-819398 kubelet[3408]: E0816 00:53:38.527315    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:53:47 no-preload-819398 kubelet[3408]: E0816 00:53:47.794939    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769627794707538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:47 no-preload-819398 kubelet[3408]: E0816 00:53:47.794980    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769627794707538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:51 no-preload-819398 kubelet[3408]: E0816 00:53:51.527361    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	Aug 16 00:53:57 no-preload-819398 kubelet[3408]: E0816 00:53:57.796576    3408 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769637796014892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:53:57 no-preload-819398 kubelet[3408]: E0816 00:53:57.797143    3408 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769637796014892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 16 00:54:02 no-preload-819398 kubelet[3408]: E0816 00:54:02.527938    3408 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dz5h4" podUID="02a73f5f-79ef-4563-81e1-afb5ad8e2e38"
	
	
	==> storage-provisioner [f6b16872e7c9a9093f2db5519f1a81fc1978dac654132e59ba7f2cce41e8a3f7] <==
	I0816 00:39:44.909499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 00:39:44.919770       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 00:39:44.921370       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 00:39:44.931933       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 00:39:44.932207       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-819398_8c560925-8e6c-46e7-a19f-5e6bb7d0cd3f!
	I0816 00:39:44.934517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0c373b1e-4f23-4ee3-b37a-25fd9a0ead7f", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-819398_8c560925-8e6c-46e7-a19f-5e6bb7d0cd3f became leader
	I0816 00:39:45.033350       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-819398_8c560925-8e6c-46e7-a19f-5e6bb7d0cd3f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-819398 -n no-preload-819398
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-819398 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-dz5h4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-819398 describe pod metrics-server-6867b74b74-dz5h4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-819398 describe pod metrics-server-6867b74b74-dz5h4: exit status 1 (63.898468ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-dz5h4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-819398 describe pod metrics-server-6867b74b74-dz5h4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (310.73s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (96.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0816 00:51:31.509002   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0816 00:52:09.801328   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/calico-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0816 00:52:28.472699   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
E0816 00:52:51.160175   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.137:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.137:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 2 (231.246741ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-098619" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-098619 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-098619 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.564µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-098619 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
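(Reviewer note, not part of the captured test output: the assertion above checks that the dashboard-metrics-scraper deployment uses the substituted image registry.k8s.io/echoserver:1.4. When the apiserver is reachable, the same check can be made by hand with standard kubectl, along the lines of:

    kubectl --context old-k8s-version-098619 -n kubernetes-dashboard \
      get deployment dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

In this run the deployment info is empty because the cluster's apiserver never came back after the stop/start, so kubectl commands were skipped.)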
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 2 (215.655686ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-098619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-098619 logs -n 25: (1.643939718s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-697641 sudo cat                              | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo                                  | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo find                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-697641 sudo crio                             | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-697641                                       | bridge-697641                | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-067133 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:24 UTC |
	|         | disable-driver-mounts-067133                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:24 UTC | 16 Aug 24 00:25 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-819398             | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC | 16 Aug 24 00:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-758469            | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-616827  | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC | 16 Aug 24 00:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:26 UTC |                     |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-098619        | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-819398                  | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-819398                                   | no-preload-819398            | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-758469                 | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-616827       | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-758469                                  | embed-certs-758469           | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-616827 | jenkins | v1.33.1 | 16 Aug 24 00:28 UTC | 16 Aug 24 00:38 UTC |
	|         | default-k8s-diff-port-616827                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-098619             | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC | 16 Aug 24 00:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-098619                              | old-k8s-version-098619       | jenkins | v1.33.1 | 16 Aug 24 00:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 00:29:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 00:29:51.785297   79191 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:29:51.785388   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785392   79191 out.go:358] Setting ErrFile to fd 2...
	I0816 00:29:51.785396   79191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:29:51.785578   79191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:29:51.786145   79191 out.go:352] Setting JSON to false
	I0816 00:29:51.787066   79191 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7892,"bootTime":1723760300,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:29:51.787122   79191 start.go:139] virtualization: kvm guest
	I0816 00:29:51.789057   79191 out.go:177] * [old-k8s-version-098619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:29:51.790274   79191 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:29:51.790269   79191 notify.go:220] Checking for updates...
	I0816 00:29:51.792828   79191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:29:51.794216   79191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:29:51.795553   79191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:29:51.796761   79191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:29:51.798018   79191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:29:51.799561   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:29:51.799935   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.799990   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.814617   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0816 00:29:51.815056   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.815584   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.815606   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.815933   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.816131   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:51.817809   79191 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 00:29:51.819204   79191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:29:51.819604   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:29:51.819652   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:29:51.834270   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0816 00:29:51.834584   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:29:51.834992   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:29:51.835015   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:29:51.835303   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:29:51.835478   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:29:49.226097   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:51.870472   79191 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 00:29:51.872031   79191 start.go:297] selected driver: kvm2
	I0816 00:29:51.872049   79191 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.872137   79191 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:29:51.872785   79191 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.872848   79191 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 00:29:51.887731   79191 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0816 00:29:51.888078   79191 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:29:51.888141   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:29:51.888154   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:29:51.888203   79191 start.go:340] cluster config:
	{Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:29:51.888300   79191 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 00:29:51.890190   79191 out.go:177] * Starting "old-k8s-version-098619" primary control-plane node in "old-k8s-version-098619" cluster
	I0816 00:29:51.891529   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:29:51.891557   79191 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0816 00:29:51.891565   79191 cache.go:56] Caching tarball of preloaded images
	I0816 00:29:51.891645   79191 preload.go:172] Found /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0816 00:29:51.891664   79191 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0816 00:29:51.891747   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:29:51.891915   79191 start.go:360] acquireMachinesLock for old-k8s-version-098619: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:29:55.306158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:29:58.378266   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:04.458137   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:07.530158   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:13.610160   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:16.682057   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:22.762088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:25.834157   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:31.914106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:34.986091   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:41.066143   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:44.138152   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:50.218140   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:53.290166   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:30:59.370080   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:02.442130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:08.522126   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:11.594144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:17.674104   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:20.746185   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:26.826131   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:29.898113   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:35.978100   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:39.050136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:45.130120   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:48.202078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:54.282078   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:31:57.354088   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:03.434136   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:06.506153   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:12.586125   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:15.658144   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:21.738130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:24.810191   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:30.890130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:33.962132   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:40.042062   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:43.114154   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:49.194151   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:52.266130   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:32:58.346106   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:01.418139   78489 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.15:22: connect: no route to host
	I0816 00:33:04.422042   78713 start.go:364] duration metric: took 4m25.166768519s to acquireMachinesLock for "embed-certs-758469"
	I0816 00:33:04.422099   78713 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:04.422107   78713 fix.go:54] fixHost starting: 
	I0816 00:33:04.422426   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:04.422458   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:04.437335   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I0816 00:33:04.437779   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:04.438284   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:04.438306   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:04.438646   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:04.438873   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:04.439045   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:04.440597   78713 fix.go:112] recreateIfNeeded on embed-certs-758469: state=Stopped err=<nil>
	I0816 00:33:04.440627   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	W0816 00:33:04.440781   78713 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:04.442527   78713 out.go:177] * Restarting existing kvm2 VM for "embed-certs-758469" ...
	I0816 00:33:04.419735   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:04.419772   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420077   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:33:04.420102   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:33:04.420299   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:33:04.421914   78489 machine.go:96] duration metric: took 4m37.429789672s to provisionDockerMachine
	I0816 00:33:04.421957   78489 fix.go:56] duration metric: took 4m37.451098771s for fixHost
	I0816 00:33:04.421965   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 4m37.451130669s
	W0816 00:33:04.421995   78489 start.go:714] error starting host: provision: host is not running
	W0816 00:33:04.422099   78489 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0816 00:33:04.422111   78489 start.go:729] Will try again in 5 seconds ...
	I0816 00:33:04.443838   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Start
	I0816 00:33:04.444035   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring networks are active...
	I0816 00:33:04.444849   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network default is active
	I0816 00:33:04.445168   78713 main.go:141] libmachine: (embed-certs-758469) Ensuring network mk-embed-certs-758469 is active
	I0816 00:33:04.445491   78713 main.go:141] libmachine: (embed-certs-758469) Getting domain xml...
	I0816 00:33:04.446159   78713 main.go:141] libmachine: (embed-certs-758469) Creating domain...
	I0816 00:33:05.654817   78713 main.go:141] libmachine: (embed-certs-758469) Waiting to get IP...
	I0816 00:33:05.655625   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.656020   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.656064   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.655983   79868 retry.go:31] will retry after 273.341379ms: waiting for machine to come up
	I0816 00:33:05.930542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:05.931038   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:05.931061   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:05.931001   79868 retry.go:31] will retry after 320.172619ms: waiting for machine to come up
	I0816 00:33:06.252718   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.253117   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.253140   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.253091   79868 retry.go:31] will retry after 441.386495ms: waiting for machine to come up
	I0816 00:33:06.695681   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:06.696108   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:06.696134   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:06.696065   79868 retry.go:31] will retry after 491.272986ms: waiting for machine to come up
	I0816 00:33:07.188683   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.189070   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.189092   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.189025   79868 retry.go:31] will retry after 536.865216ms: waiting for machine to come up
	I0816 00:33:07.727831   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:07.728246   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:07.728276   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:07.728193   79868 retry.go:31] will retry after 813.064342ms: waiting for machine to come up
	I0816 00:33:08.543096   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:08.543605   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:08.543637   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:08.543549   79868 retry.go:31] will retry after 1.00495091s: waiting for machine to come up
	I0816 00:33:09.424586   78489 start.go:360] acquireMachinesLock for no-preload-819398: {Name:mk2bb1901c2e94ad7d7514ec24a0540b1ab722dd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 00:33:09.549815   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:09.550226   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:09.550255   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:09.550175   79868 retry.go:31] will retry after 1.483015511s: waiting for machine to come up
	I0816 00:33:11.034879   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:11.035277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:11.035315   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:11.035224   79868 retry.go:31] will retry after 1.513237522s: waiting for machine to come up
	I0816 00:33:12.550817   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:12.551172   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:12.551196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:12.551126   79868 retry.go:31] will retry after 1.483165174s: waiting for machine to come up
	I0816 00:33:14.036748   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:14.037142   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:14.037170   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:14.037087   79868 retry.go:31] will retry after 1.772679163s: waiting for machine to come up
	I0816 00:33:15.811699   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:15.812300   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:15.812334   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:15.812226   79868 retry.go:31] will retry after 3.026936601s: waiting for machine to come up
	I0816 00:33:18.842362   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:18.842759   78713 main.go:141] libmachine: (embed-certs-758469) DBG | unable to find current IP address of domain embed-certs-758469 in network mk-embed-certs-758469
	I0816 00:33:18.842788   78713 main.go:141] libmachine: (embed-certs-758469) DBG | I0816 00:33:18.842715   79868 retry.go:31] will retry after 4.400445691s: waiting for machine to come up
	I0816 00:33:23.247813   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248223   78713 main.go:141] libmachine: (embed-certs-758469) Found IP for machine: 192.168.39.185
	I0816 00:33:23.248254   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has current primary IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.248265   78713 main.go:141] libmachine: (embed-certs-758469) Reserving static IP address...
	I0816 00:33:23.248613   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.248641   78713 main.go:141] libmachine: (embed-certs-758469) DBG | skip adding static IP to network mk-embed-certs-758469 - found existing host DHCP lease matching {name: "embed-certs-758469", mac: "52:54:00:24:07:00", ip: "192.168.39.185"}
	I0816 00:33:23.248654   78713 main.go:141] libmachine: (embed-certs-758469) Reserved static IP address: 192.168.39.185
	I0816 00:33:23.248673   78713 main.go:141] libmachine: (embed-certs-758469) Waiting for SSH to be available...
	I0816 00:33:23.248687   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Getting to WaitForSSH function...
	I0816 00:33:23.250607   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.250931   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.250965   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.251113   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH client type: external
	I0816 00:33:23.251141   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa (-rw-------)
	I0816 00:33:23.251179   78713 main.go:141] libmachine: (embed-certs-758469) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:23.251196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | About to run SSH command:
	I0816 00:33:23.251211   78713 main.go:141] libmachine: (embed-certs-758469) DBG | exit 0
	I0816 00:33:23.373899   78713 main.go:141] libmachine: (embed-certs-758469) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:23.374270   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetConfigRaw
	I0816 00:33:23.374914   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.377034   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377343   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.377370   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.377561   78713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/config.json ...
	I0816 00:33:23.377760   78713 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:23.377776   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:23.378014   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.379950   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380248   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.380277   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.380369   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.380524   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380668   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.380795   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.380950   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.381134   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.381145   78713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:23.486074   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:23.486106   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486462   78713 buildroot.go:166] provisioning hostname "embed-certs-758469"
	I0816 00:33:23.486491   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.486677   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.489520   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.489905   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.489924   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.490108   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.490279   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490427   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.490566   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.490730   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.490901   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.490920   78713 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-758469 && echo "embed-certs-758469" | sudo tee /etc/hostname
	I0816 00:33:23.614635   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-758469
	
	I0816 00:33:23.614671   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.617308   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617673   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.617701   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.617881   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.618087   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.618351   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.618536   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:23.618721   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:23.618746   78713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-758469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-758469/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-758469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:23.734901   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:23.734931   78713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:23.734946   78713 buildroot.go:174] setting up certificates
	I0816 00:33:23.734953   78713 provision.go:84] configureAuth start
	I0816 00:33:23.734961   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetMachineName
	I0816 00:33:23.735255   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:23.737952   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738312   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.738341   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.738445   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.740589   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.740926   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.740953   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.741060   78713 provision.go:143] copyHostCerts
	I0816 00:33:23.741121   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:23.741138   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:23.741203   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:23.741357   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:23.741367   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:23.741393   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:23.741452   78713 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:23.741458   78713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:23.741478   78713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:23.741525   78713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.embed-certs-758469 san=[127.0.0.1 192.168.39.185 embed-certs-758469 localhost minikube]
	I0816 00:33:23.871115   78713 provision.go:177] copyRemoteCerts
	I0816 00:33:23.871167   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:23.871190   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:23.874049   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874505   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:23.874538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:23.874720   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:23.874913   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:23.875079   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:23.875210   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:23.959910   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:23.984454   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:33:24.009067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:24.036195   78713 provision.go:87] duration metric: took 301.229994ms to configureAuth
	I0816 00:33:24.036218   78713 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:24.036389   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:24.036453   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.039196   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039538   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.039562   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.039771   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.039970   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040125   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.040224   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.040372   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.040584   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.040612   78713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:24.550693   78747 start.go:364] duration metric: took 4m44.527028624s to acquireMachinesLock for "default-k8s-diff-port-616827"
	I0816 00:33:24.550757   78747 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:24.550763   78747 fix.go:54] fixHost starting: 
	I0816 00:33:24.551164   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:24.551203   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:24.567741   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0816 00:33:24.568138   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:24.568674   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:33:24.568703   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:24.569017   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:24.569212   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:24.569385   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:33:24.570856   78747 fix.go:112] recreateIfNeeded on default-k8s-diff-port-616827: state=Stopped err=<nil>
	I0816 00:33:24.570901   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	W0816 00:33:24.571074   78747 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:24.572673   78747 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-616827" ...
	I0816 00:33:24.574220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Start
	I0816 00:33:24.574403   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring networks are active...
	I0816 00:33:24.575086   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network default is active
	I0816 00:33:24.575528   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Ensuring network mk-default-k8s-diff-port-616827 is active
	I0816 00:33:24.576033   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Getting domain xml...
	I0816 00:33:24.576734   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Creating domain...
	I0816 00:33:24.314921   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:24.314951   78713 machine.go:96] duration metric: took 937.178488ms to provisionDockerMachine
	I0816 00:33:24.314964   78713 start.go:293] postStartSetup for "embed-certs-758469" (driver="kvm2")
	I0816 00:33:24.314974   78713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:24.315007   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.315405   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:24.315430   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.317962   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318242   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.318270   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.318390   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.318588   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.318763   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.318900   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.400628   78713 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:24.405061   78713 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:24.405082   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:24.405148   78713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:24.405215   78713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:24.405302   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:24.414985   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:24.439646   78713 start.go:296] duration metric: took 124.668147ms for postStartSetup
	I0816 00:33:24.439692   78713 fix.go:56] duration metric: took 20.017583324s for fixHost
	I0816 00:33:24.439719   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.442551   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.442920   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.442954   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.443051   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.443257   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443434   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.443567   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.443740   78713 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:24.443912   78713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0816 00:33:24.443921   78713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:24.550562   78713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768404.525876526
	
	I0816 00:33:24.550588   78713 fix.go:216] guest clock: 1723768404.525876526
	I0816 00:33:24.550599   78713 fix.go:229] Guest: 2024-08-16 00:33:24.525876526 +0000 UTC Remote: 2024-08-16 00:33:24.439696953 +0000 UTC m=+285.318245053 (delta=86.179573ms)
	I0816 00:33:24.550618   78713 fix.go:200] guest clock delta is within tolerance: 86.179573ms
	I0816 00:33:24.550623   78713 start.go:83] releasing machines lock for "embed-certs-758469", held for 20.128541713s
	I0816 00:33:24.550647   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.551090   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:24.554013   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554358   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.554382   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.554572   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555062   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555222   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:24.555279   78713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:24.555330   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.555441   78713 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:24.555463   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:24.558216   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558368   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558542   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558567   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558719   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:24.558723   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558742   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:24.558883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:24.558925   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559074   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:24.559122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559205   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:24.559285   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.559329   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:24.656926   78713 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:24.662590   78713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:24.811290   78713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:24.817486   78713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:24.817570   78713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:24.838317   78713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:24.838342   78713 start.go:495] detecting cgroup driver to use...
	I0816 00:33:24.838396   78713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:24.856294   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:24.875603   78713 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:24.875650   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:24.890144   78713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:24.904327   78713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:25.018130   78713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:25.149712   78713 docker.go:233] disabling docker service ...
	I0816 00:33:25.149795   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:25.165494   78713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:25.179554   78713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:25.330982   78713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:25.476436   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:25.493242   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:25.515688   78713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:25.515762   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.529924   78713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:25.529997   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.541412   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.551836   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.563356   78713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:25.574486   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.585533   78713 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.604169   78713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:25.615335   78713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:25.629366   78713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:25.629427   78713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:25.645937   78713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:25.657132   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:25.771891   78713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:25.914817   78713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:25.914904   78713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:25.919572   78713 start.go:563] Will wait 60s for crictl version
	I0816 00:33:25.919620   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:33:25.923419   78713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:25.969387   78713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:25.969484   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.002529   78713 ssh_runner.go:195] Run: crio --version
	I0816 00:33:26.035709   78713 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:26.036921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetIP
	I0816 00:33:26.039638   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040001   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:26.040023   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:26.040254   78713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:26.044444   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:26.057172   78713 kubeadm.go:883] updating cluster {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:26.057326   78713 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:26.057382   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:26.093950   78713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:26.094031   78713 ssh_runner.go:195] Run: which lz4
	I0816 00:33:26.097998   78713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:26.102152   78713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:26.102183   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:27.538323   78713 crio.go:462] duration metric: took 1.440354469s to copy over tarball
	I0816 00:33:27.538400   78713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:25.885210   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting to get IP...
	I0816 00:33:25.886135   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886555   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:25.886620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:25.886538   80004 retry.go:31] will retry after 214.751664ms: waiting for machine to come up
	I0816 00:33:26.103182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103652   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.103677   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.103603   80004 retry.go:31] will retry after 239.667632ms: waiting for machine to come up
	I0816 00:33:26.345223   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.345776   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.345701   80004 retry.go:31] will retry after 474.740445ms: waiting for machine to come up
	I0816 00:33:26.822224   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822682   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:26.822716   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:26.822639   80004 retry.go:31] will retry after 574.324493ms: waiting for machine to come up
	I0816 00:33:27.398433   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398939   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.398971   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.398904   80004 retry.go:31] will retry after 567.388033ms: waiting for machine to come up
	I0816 00:33:27.967686   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968182   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:27.968225   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:27.968093   80004 retry.go:31] will retry after 940.450394ms: waiting for machine to come up
	I0816 00:33:28.910549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911058   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:28.911088   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:28.911031   80004 retry.go:31] will retry after 919.494645ms: waiting for machine to come up
	I0816 00:33:29.832687   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833204   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:29.833244   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:29.833189   80004 retry.go:31] will retry after 1.332024716s: waiting for machine to come up
	I0816 00:33:29.677224   78713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.138774475s)
	I0816 00:33:29.677252   78713 crio.go:469] duration metric: took 2.138901242s to extract the tarball
	I0816 00:33:29.677261   78713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:29.716438   78713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:29.768597   78713 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:29.768622   78713 cache_images.go:84] Images are preloaded, skipping loading
	I0816 00:33:29.768634   78713 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.0 crio true true} ...
	I0816 00:33:29.768787   78713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-758469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:29.768874   78713 ssh_runner.go:195] Run: crio config
	I0816 00:33:29.813584   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:29.813607   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:29.813620   78713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:29.813644   78713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-758469 NodeName:embed-certs-758469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:29.813776   78713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-758469"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:29.813862   78713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:29.825680   78713 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:29.825744   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:29.836314   78713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0816 00:33:29.853030   78713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:29.869368   78713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0816 00:33:29.886814   78713 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:29.890644   78713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:29.903138   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:30.040503   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:30.058323   78713 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469 for IP: 192.168.39.185
	I0816 00:33:30.058351   78713 certs.go:194] generating shared ca certs ...
	I0816 00:33:30.058372   78713 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:30.058559   78713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:30.058624   78713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:30.058638   78713 certs.go:256] generating profile certs ...
	I0816 00:33:30.058778   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/client.key
	I0816 00:33:30.058873   78713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key.0d0e36ad
	I0816 00:33:30.058930   78713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key
	I0816 00:33:30.059101   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:30.059146   78713 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:30.059162   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:30.059197   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:30.059251   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:30.059285   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:30.059345   78713 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:30.060202   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:30.098381   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:30.135142   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:30.175518   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:30.214349   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0816 00:33:30.249278   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:30.273772   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:30.298067   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/embed-certs-758469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:30.324935   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:30.351149   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:30.375636   78713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:30.399250   78713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:30.417646   78713 ssh_runner.go:195] Run: openssl version
	I0816 00:33:30.423691   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:30.435254   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439651   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.439700   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:30.445673   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:30.456779   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:30.467848   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472199   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.472274   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:30.478109   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:30.489481   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:30.500747   78713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505116   78713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.505162   78713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:30.510739   78713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
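	(Editor's note) The block above installs each CA into the guest's trust store: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under that hash so TLS libraries can find it. A minimal Go sketch of the same pattern, illustrative only and not minikube's actual implementation (paths are the ones from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// trustCA mirrors the `openssl x509 -hash` + `ln -fs` pair seen in the log:
	// compute the subject hash of the certificate and link it into /etc/ssl/certs.
	func trustCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		for _, c := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/20078.pem",
			"/usr/share/ca-certificates/200782.pem",
		} {
			if err := trustCA(c); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}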
	I0816 00:33:30.521829   78713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:30.526444   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:30.532373   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:30.538402   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:30.544697   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:30.550762   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:30.556573   78713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
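	(Editor's note) Each `openssl x509 -checkend 86400` call above exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is what forces regeneration. An equivalent check in Go, sketched only to show the idea; the path is one of the certs listed above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}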
	I0816 00:33:30.562513   78713 kubeadm.go:392] StartCluster: {Name:embed-certs-758469 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-758469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:30.562602   78713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:30.562650   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.607119   78713 cri.go:89] found id: ""
	I0816 00:33:30.607197   78713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:30.617798   78713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:30.617818   78713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:30.617873   78713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:30.627988   78713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:30.628976   78713 kubeconfig.go:125] found "embed-certs-758469" server: "https://192.168.39.185:8443"
	I0816 00:33:30.631601   78713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:30.642001   78713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.185
	I0816 00:33:30.642036   78713 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:30.642047   78713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:30.642088   78713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:30.685946   78713 cri.go:89] found id: ""
	I0816 00:33:30.686049   78713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:30.704130   78713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:30.714467   78713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:30.714490   78713 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:30.714534   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:33:30.723924   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:30.723985   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:30.733804   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:33:30.743345   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:30.743412   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:30.753604   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.763271   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:30.763340   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:30.773121   78713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:33:30.782507   78713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:30.782565   78713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:30.792652   78713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:33:30.802523   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:30.923193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.206424   78713 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.283195087s)
	I0816 00:33:32.206449   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.435275   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:32.509193   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
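	(Editor's note) Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml. A rough sketch of that loop, assuming local execution instead of the ssh_runner the log uses:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same phase order as the log; each phase reuses the generated kubeadm config.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}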
	I0816 00:33:32.590924   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:32.591020   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.091804   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.591198   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:33.607568   78713 api_server.go:72] duration metric: took 1.016656713s to wait for apiserver process to appear ...
	I0816 00:33:33.607596   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:33.607619   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:31.166506   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166900   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:31.166927   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:31.166860   80004 retry.go:31] will retry after 1.213971674s: waiting for machine to come up
	I0816 00:33:32.382376   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382862   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:32.382889   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:32.382821   80004 retry.go:31] will retry after 2.115615681s: waiting for machine to come up
	I0816 00:33:34.501236   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501697   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:34.501725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:34.501646   80004 retry.go:31] will retry after 2.495252025s: waiting for machine to come up
	I0816 00:33:36.334341   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.334374   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.334389   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.351971   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:36.352011   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:36.608364   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:36.614582   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:36.614619   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.107654   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.113352   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.113384   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:37.607902   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:37.614677   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:37.614710   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.108329   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.112493   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.112521   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:38.608061   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:38.613134   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:38.613172   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.107667   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.111920   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:39.111954   78713 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:39.608190   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:33:39.613818   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:33:39.619467   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:39.619490   78713 api_server.go:131] duration metric: took 6.011887872s to wait for apiserver health ...
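	(Editor's note) The 403 and 500 responses above are expected while the apiserver's post-start hooks (rbac/bootstrap-roles, the bootstrap priority classes, API service discovery) finish; the wait loop simply re-polls /healthz until it returns 200. A simplified version of that poll, assuming the endpoint shown in the log and skipping TLS verification purely for the sake of the sketch:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 30; i++ {
			resp, err := client.Get("https://192.168.39.185:8443/healthz")
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz status:", code) // 403/500 while bootstrap hooks run
			} else {
				fmt.Println("healthz error:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for apiserver")
	}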
	I0816 00:33:39.619499   78713 cni.go:84] Creating CNI manager for ""
	I0816 00:33:39.619504   78713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:39.621572   78713 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:36.999158   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999616   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:36.999645   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:36.999576   80004 retry.go:31] will retry after 2.736710806s: waiting for machine to come up
	I0816 00:33:39.737818   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738286   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | unable to find current IP address of domain default-k8s-diff-port-616827 in network mk-default-k8s-diff-port-616827
	I0816 00:33:39.738320   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | I0816 00:33:39.738215   80004 retry.go:31] will retry after 3.3205645s: waiting for machine to come up
	I0816 00:33:39.623254   78713 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:39.633910   78713 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
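	(Editor's note) The 496-byte /etc/cni/net.d/1-k8s.conflist written here configures the bridge CNI chosen above. Its exact contents are not shown in the log; a typical bridge-plus-portmap conflist has roughly this shape (all values below, including the subnet, are illustrative guesses, not what minikube wrote):

	package main

	import "os"

	// An illustrative bridge+portmap CNI config of the same general shape as
	// /etc/cni/net.d/1-k8s.conflist; the real file's contents are not in the log.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}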
	I0816 00:33:39.653736   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:39.663942   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:39.663983   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:39.663994   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:39.664044   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:39.664060   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:39.664067   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:33:39.664078   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:39.664089   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:39.664107   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:33:39.664118   78713 system_pods.go:74] duration metric: took 10.358906ms to wait for pod list to return data ...
	I0816 00:33:39.664127   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:39.667639   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:39.667669   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:39.667682   78713 node_conditions.go:105] duration metric: took 3.547018ms to run NodePressure ...
	I0816 00:33:39.667701   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:39.929620   78713 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934264   78713 kubeadm.go:739] kubelet initialised
	I0816 00:33:39.934289   78713 kubeadm.go:740] duration metric: took 4.64037ms waiting for restarted kubelet to initialise ...
	I0816 00:33:39.934299   78713 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:39.938771   78713 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.943735   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943760   78713 pod_ready.go:82] duration metric: took 4.962601ms for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.943772   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.943781   78713 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.947900   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947925   78713 pod_ready.go:82] duration metric: took 4.129605ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.947936   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "etcd-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.947943   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:39.953367   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953400   78713 pod_ready.go:82] duration metric: took 5.445682ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:39.953412   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:39.953422   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.057510   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057533   78713 pod_ready.go:82] duration metric: took 104.099944ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.057543   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.057548   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.458355   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458389   78713 pod_ready.go:82] duration metric: took 400.832009ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.458400   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-proxy-4xc89" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.458408   78713 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:40.857939   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857964   78713 pod_ready.go:82] duration metric: took 399.549123ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:40.857974   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:40.857980   78713 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:41.257101   78713 pod_ready.go:98] node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257126   78713 pod_ready.go:82] duration metric: took 399.13078ms for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:41.257135   78713 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-758469" hosting pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:41.257142   78713 pod_ready.go:39] duration metric: took 1.322827054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
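	(Editor's note) The pod_ready loop above polls each system-critical pod and, as here, short-circuits when the hosting node is not yet Ready. A condensed client-go sketch of the same kind of wait; the kubeconfig path is the one updated later in the log, but the flag usage and label handling are assumptions rather than the test's actual code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19452-12919/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err == nil {
				ready := 0
				for i := range pods.Items {
					if podReady(&pods.Items[i]) {
						ready++
					}
				}
				fmt.Printf("%d/%d kube-system pods ready\n", ready, len(pods.Items))
				if len(pods.Items) > 0 && ready == len(pods.Items) {
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for kube-system pods")
	}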
	I0816 00:33:41.257159   78713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:33:41.269076   78713 ops.go:34] apiserver oom_adj: -16
	I0816 00:33:41.269098   78713 kubeadm.go:597] duration metric: took 10.651273415s to restartPrimaryControlPlane
	I0816 00:33:41.269107   78713 kubeadm.go:394] duration metric: took 10.706599955s to StartCluster
	I0816 00:33:41.269127   78713 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.269191   78713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:33:41.271380   78713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:41.271679   78713 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:33:41.271714   78713 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:33:41.271812   78713 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-758469"
	I0816 00:33:41.271834   78713 addons.go:69] Setting default-storageclass=true in profile "embed-certs-758469"
	I0816 00:33:41.271845   78713 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-758469"
	W0816 00:33:41.271858   78713 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:33:41.271874   78713 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-758469"
	I0816 00:33:41.271882   78713 config.go:182] Loaded profile config "embed-certs-758469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:41.271891   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.271860   78713 addons.go:69] Setting metrics-server=true in profile "embed-certs-758469"
	I0816 00:33:41.271934   78713 addons.go:234] Setting addon metrics-server=true in "embed-certs-758469"
	W0816 00:33:41.271952   78713 addons.go:243] addon metrics-server should already be in state true
	I0816 00:33:41.272022   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.272324   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272575   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272604   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272704   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.272718   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.272745   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.274599   78713 out.go:177] * Verifying Kubernetes components...
	I0816 00:33:41.276283   78713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:41.292526   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0816 00:33:41.292560   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0816 00:33:41.292556   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43083
	I0816 00:33:41.293000   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293053   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293004   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.293482   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293499   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293592   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293606   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.293625   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293607   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.293891   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293939   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.293976   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.294132   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.294475   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294483   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.294517   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.294522   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.297714   78713 addons.go:234] Setting addon default-storageclass=true in "embed-certs-758469"
	W0816 00:33:41.297747   78713 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:33:41.297787   78713 host.go:66] Checking if "embed-certs-758469" exists ...
	I0816 00:33:41.298192   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.298238   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.310002   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0816 00:33:41.310000   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0816 00:33:41.310469   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310521   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.310899   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.310917   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311027   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.311048   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.311293   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311476   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.311491   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.311642   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.313614   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.313697   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.315474   78713 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:33:41.315484   78713 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:33:41.316719   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0816 00:33:41.316887   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:33:41.316902   78713 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:33:41.316921   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.316975   78713 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.316985   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:33:41.316995   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.317061   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.317572   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.317594   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.317941   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.318669   78713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:41.318702   78713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:41.320288   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320668   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.320695   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320726   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.320939   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321122   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321241   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.321267   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.321402   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.321497   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.321547   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.321592   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.321883   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.322021   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.334230   78713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0816 00:33:41.334580   78713 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:41.335088   78713 main.go:141] libmachine: Using API Version  1
	I0816 00:33:41.335107   78713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:41.335387   78713 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:41.335549   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetState
	I0816 00:33:41.336891   78713 main.go:141] libmachine: (embed-certs-758469) Calling .DriverName
	I0816 00:33:41.337084   78713 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.337100   78713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:33:41.337115   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHHostname
	I0816 00:33:41.340204   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340667   78713 main.go:141] libmachine: (embed-certs-758469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:07:00", ip: ""} in network mk-embed-certs-758469: {Iface:virbr2 ExpiryTime:2024-08-16 01:33:15 +0000 UTC Type:0 Mac:52:54:00:24:07:00 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:embed-certs-758469 Clientid:01:52:54:00:24:07:00}
	I0816 00:33:41.340697   78713 main.go:141] libmachine: (embed-certs-758469) DBG | domain embed-certs-758469 has defined IP address 192.168.39.185 and MAC address 52:54:00:24:07:00 in network mk-embed-certs-758469
	I0816 00:33:41.340837   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHPort
	I0816 00:33:41.340987   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHKeyPath
	I0816 00:33:41.341120   78713 main.go:141] libmachine: (embed-certs-758469) Calling .GetSSHUsername
	I0816 00:33:41.341277   78713 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/embed-certs-758469/id_rsa Username:docker}
	I0816 00:33:41.476131   78713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:41.502242   78713 node_ready.go:35] waiting up to 6m0s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:41.559562   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:33:41.575913   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:33:41.575937   78713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:33:41.614763   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:33:41.614784   78713 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:33:41.628658   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:33:41.670367   78713 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:41.670393   78713 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:33:41.746638   78713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:33:42.849125   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.22043382s)
	I0816 00:33:42.849189   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849202   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849397   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.289807606s)
	I0816 00:33:42.849438   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849448   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849478   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849514   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849524   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849538   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849550   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.849761   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.849803   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.849813   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.849825   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.849833   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.850018   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850033   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.850059   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.850059   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.850078   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856398   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.856419   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.856647   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.856667   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.856676   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901261   78713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.1545817s)
	I0816 00:33:42.901314   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901329   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901619   78713 main.go:141] libmachine: (embed-certs-758469) DBG | Closing plugin on server side
	I0816 00:33:42.901680   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901694   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901704   78713 main.go:141] libmachine: Making call to close driver server
	I0816 00:33:42.901713   78713 main.go:141] libmachine: (embed-certs-758469) Calling .Close
	I0816 00:33:42.901953   78713 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:33:42.901973   78713 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:33:42.901986   78713 addons.go:475] Verifying addon metrics-server=true in "embed-certs-758469"
	I0816 00:33:42.904677   78713 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0816 00:33:42.905802   78713 addons.go:510] duration metric: took 1.634089536s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0816 00:33:43.506584   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:44.254575   79191 start.go:364] duration metric: took 3m52.362627542s to acquireMachinesLock for "old-k8s-version-098619"
	I0816 00:33:44.254648   79191 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:33:44.254659   79191 fix.go:54] fixHost starting: 
	I0816 00:33:44.255099   79191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:33:44.255137   79191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:33:44.271236   79191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0816 00:33:44.271591   79191 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:33:44.272030   79191 main.go:141] libmachine: Using API Version  1
	I0816 00:33:44.272052   79191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:33:44.272328   79191 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:33:44.272503   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:33:44.272660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetState
	I0816 00:33:44.274235   79191 fix.go:112] recreateIfNeeded on old-k8s-version-098619: state=Stopped err=<nil>
	I0816 00:33:44.274272   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	W0816 00:33:44.274415   79191 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:33:44.275978   79191 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-098619" ...
	I0816 00:33:43.059949   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060413   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Found IP for machine: 192.168.50.128
	I0816 00:33:43.060440   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserving static IP address...
	I0816 00:33:43.060479   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has current primary IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.060881   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.060906   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | skip adding static IP to network mk-default-k8s-diff-port-616827 - found existing host DHCP lease matching {name: "default-k8s-diff-port-616827", mac: "52:54:00:6e:4e:04", ip: "192.168.50.128"}
	I0816 00:33:43.060921   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Reserved static IP address: 192.168.50.128
	I0816 00:33:43.060937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Waiting for SSH to be available...
	I0816 00:33:43.060952   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Getting to WaitForSSH function...
	I0816 00:33:43.063249   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063552   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.063592   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.063810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH client type: external
	I0816 00:33:43.063833   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa (-rw-------)
	I0816 00:33:43.063877   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:33:43.063896   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | About to run SSH command:
	I0816 00:33:43.063905   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | exit 0
	I0816 00:33:43.185986   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | SSH cmd err, output: <nil>: 
	I0816 00:33:43.186338   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetConfigRaw
	I0816 00:33:43.186944   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.189324   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189617   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.189643   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.189890   78747 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/config.json ...
	I0816 00:33:43.190166   78747 machine.go:93] provisionDockerMachine start ...
	I0816 00:33:43.190192   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:43.190401   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.192515   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192836   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.192865   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.192940   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.193118   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193280   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.193454   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.193614   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.193812   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.193825   78747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:33:43.290143   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:33:43.290168   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290395   78747 buildroot.go:166] provisioning hostname "default-k8s-diff-port-616827"
	I0816 00:33:43.290422   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.290603   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.293231   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293620   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.293665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.293829   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.294038   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294195   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.294325   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.294479   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.294685   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.294703   78747 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-616827 && echo "default-k8s-diff-port-616827" | sudo tee /etc/hostname
	I0816 00:33:43.406631   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-616827
	
	I0816 00:33:43.406655   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.409271   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409610   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.409641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.409794   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.409984   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410160   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.410321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.410491   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.410670   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.410695   78747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-616827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-616827/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-616827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:33:43.515766   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:33:43.515796   78747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:33:43.515829   78747 buildroot.go:174] setting up certificates
	I0816 00:33:43.515841   78747 provision.go:84] configureAuth start
	I0816 00:33:43.515850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetMachineName
	I0816 00:33:43.516128   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:43.518730   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519055   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.519087   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.519220   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.521186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.521538   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.521691   78747 provision.go:143] copyHostCerts
	I0816 00:33:43.521746   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:33:43.521764   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:33:43.521822   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:33:43.521949   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:33:43.521959   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:33:43.521982   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:33:43.522050   78747 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:33:43.522057   78747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:33:43.522074   78747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:33:43.522132   78747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-616827 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-616827 localhost minikube]
	I0816 00:33:43.601126   78747 provision.go:177] copyRemoteCerts
	I0816 00:33:43.601179   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:33:43.601203   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.603816   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604148   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.604180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.604336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.604549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.604725   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.604863   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:43.686829   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:33:43.712297   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 00:33:43.738057   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 00:33:43.762820   78747 provision.go:87] duration metric: took 246.967064ms to configureAuth
	I0816 00:33:43.762853   78747 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:33:43.763069   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:33:43.763155   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:43.765886   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766256   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:43.766287   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:43.766447   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:43.766641   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766813   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:43.766982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:43.767164   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:43.767318   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:43.767334   78747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:33:44.025337   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:33:44.025373   78747 machine.go:96] duration metric: took 835.190539ms to provisionDockerMachine
	I0816 00:33:44.025387   78747 start.go:293] postStartSetup for "default-k8s-diff-port-616827" (driver="kvm2")
	I0816 00:33:44.025401   78747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:33:44.025416   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.025780   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:33:44.025804   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.028307   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028591   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.028618   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.028740   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.028925   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.029117   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.029281   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.109481   78747 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:33:44.115290   78747 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:33:44.115317   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:33:44.115388   78747 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:33:44.115482   78747 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:33:44.115597   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:33:44.128677   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:44.154643   78747 start.go:296] duration metric: took 129.242138ms for postStartSetup
	I0816 00:33:44.154685   78747 fix.go:56] duration metric: took 19.603921801s for fixHost
	I0816 00:33:44.154705   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.157477   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.157907   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.157937   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.158051   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.158264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158411   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.158580   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.158757   78747 main.go:141] libmachine: Using SSH client type: native
	I0816 00:33:44.158981   78747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0816 00:33:44.158996   78747 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:33:44.254419   78747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768424.226223949
	
	I0816 00:33:44.254443   78747 fix.go:216] guest clock: 1723768424.226223949
	I0816 00:33:44.254452   78747 fix.go:229] Guest: 2024-08-16 00:33:44.226223949 +0000 UTC Remote: 2024-08-16 00:33:44.154688835 +0000 UTC m=+304.265683075 (delta=71.535114ms)
	I0816 00:33:44.254476   78747 fix.go:200] guest clock delta is within tolerance: 71.535114ms
	I0816 00:33:44.254482   78747 start.go:83] releasing machines lock for "default-k8s-diff-port-616827", held for 19.703745588s
	I0816 00:33:44.254504   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.254750   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:44.257516   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.257879   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.257910   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.258111   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258665   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258828   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:33:44.258908   78747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:33:44.258946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.259033   78747 ssh_runner.go:195] Run: cat /version.json
	I0816 00:33:44.259048   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:33:44.261566   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261814   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.261978   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262008   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262112   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262145   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:44.262180   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:44.262254   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:33:44.262390   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262442   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:33:44.262502   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.262549   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:33:44.262642   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:33:44.346934   78747 ssh_runner.go:195] Run: systemctl --version
	I0816 00:33:44.370413   78747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:33:44.519130   78747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:33:44.525276   78747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:33:44.525344   78747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:33:44.549125   78747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:33:44.549154   78747 start.go:495] detecting cgroup driver to use...
	I0816 00:33:44.549227   78747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:33:44.575221   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:33:44.592214   78747 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:33:44.592270   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:33:44.607403   78747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:33:44.629127   78747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:33:44.786185   78747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:33:44.954426   78747 docker.go:233] disabling docker service ...
	I0816 00:33:44.954495   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:33:44.975169   78747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:33:44.994113   78747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:33:45.142572   78747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:33:45.297255   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:33:45.313401   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:33:45.334780   78747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:33:45.334851   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.346039   78747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:33:45.346111   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.357681   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.368607   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.381164   78747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:33:45.394060   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.406010   78747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.424720   78747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:33:45.437372   78747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:33:45.450515   78747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:33:45.450595   78747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:33:45.465740   78747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:33:45.476568   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:45.629000   78747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:33:45.781044   78747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:33:45.781142   78747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:33:45.787480   78747 start.go:563] Will wait 60s for crictl version
	I0816 00:33:45.787551   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:33:45.791907   78747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:33:45.836939   78747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:33:45.837025   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.869365   78747 ssh_runner.go:195] Run: crio --version
	I0816 00:33:45.907162   78747 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:33:44.277288   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .Start
	I0816 00:33:44.277426   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring networks are active...
	I0816 00:33:44.278141   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network default is active
	I0816 00:33:44.278471   79191 main.go:141] libmachine: (old-k8s-version-098619) Ensuring network mk-old-k8s-version-098619 is active
	I0816 00:33:44.278820   79191 main.go:141] libmachine: (old-k8s-version-098619) Getting domain xml...
	I0816 00:33:44.279523   79191 main.go:141] libmachine: (old-k8s-version-098619) Creating domain...
	I0816 00:33:45.643704   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting to get IP...
	I0816 00:33:45.644691   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.645213   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.645247   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.645162   80212 retry.go:31] will retry after 198.057532ms: waiting for machine to come up
	I0816 00:33:45.844756   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:45.845297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:45.845321   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:45.845247   80212 retry.go:31] will retry after 288.630433ms: waiting for machine to come up
	I0816 00:33:46.135913   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.136413   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.136442   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.136365   80212 retry.go:31] will retry after 456.48021ms: waiting for machine to come up
	I0816 00:33:46.594170   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:46.594649   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:46.594678   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:46.594592   80212 retry.go:31] will retry after 501.49137ms: waiting for machine to come up
	I0816 00:33:46.006040   78713 node_ready.go:53] node "embed-certs-758469" has status "Ready":"False"
	I0816 00:33:47.007144   78713 node_ready.go:49] node "embed-certs-758469" has status "Ready":"True"
	I0816 00:33:47.007172   78713 node_ready.go:38] duration metric: took 5.504897396s for node "embed-certs-758469" to be "Ready" ...
	I0816 00:33:47.007183   78713 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:47.014800   78713 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:49.022567   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:45.908518   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetIP
	I0816 00:33:45.912248   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.912762   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:33:45.912797   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:33:45.913115   78747 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 00:33:45.917917   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:45.935113   78747 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:33:45.935294   78747 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:33:45.935351   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:45.988031   78747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:33:45.988115   78747 ssh_runner.go:195] Run: which lz4
	I0816 00:33:45.992508   78747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:33:45.997108   78747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:33:45.997199   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0816 00:33:47.459404   78747 crio.go:462] duration metric: took 1.466928999s to copy over tarball
	I0816 00:33:47.459478   78747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:33:49.621449   78747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16194292s)
	I0816 00:33:49.621484   78747 crio.go:469] duration metric: took 2.162054092s to extract the tarball
	I0816 00:33:49.621494   78747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:33:49.660378   78747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:33:49.709446   78747 crio.go:514] all images are preloaded for cri-o runtime.
	I0816 00:33:49.709471   78747 cache_images.go:84] Images are preloaded, skipping loading
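The preload path above is: ask crictl for its image list, notice kube-apiserver:v1.31.0 is missing, scp the ~389 MB preloaded tarball to /preloaded.tar.lz4, untar it into /var, remove the tarball, and re-run crictl to confirm the images are now present. A minimal, hedged sketch of that first check (the struct only models the fields read from `crictl images --output json`; this is not minikube's cache_images code):

    // preload_check.go: hedged sketch of "are the preloaded images already present?"
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictlImages mirrors the subset of `crictl images --output json` we care about.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether crictl already knows about the given tag,
    // e.g. "registry.k8s.io/kube-apiserver:v1.31.0".
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
        fmt.Println(ok, err) // false triggers the tarball copy-and-extract path seen in the log
    }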
	I0816 00:33:49.709481   78747 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.0 crio true true} ...
	I0816 00:33:49.709609   78747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-616827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:33:49.709704   78747 ssh_runner.go:195] Run: crio config
	I0816 00:33:49.756470   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:49.756497   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:49.756510   78747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:33:49.756534   78747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-616827 NodeName:default-k8s-diff-port-616827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:33:49.756745   78747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-616827"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:33:49.756827   78747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:33:49.766769   78747 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:33:49.766840   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:33:49.776367   78747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0816 00:33:49.793191   78747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:33:49.811993   78747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
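At this point the multi-document kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) has been rendered and copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A quick offline sanity check of such a file is to split it on the document separator and unmarshal each part; the sketch below uses gopkg.in/yaml.v3 and is illustrative only, not something the test harness runs:

    // kubeadm_yaml_check.go: hedged sketch that sanity-checks a multi-document kubeadm config.
    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        // Documents are separated by "---" lines; a rough split is enough for a sketch.
        for _, doc := range strings.Split(string(data), "\n---") {
            var m map[string]interface{}
            if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
                panic(err) // malformed YAML would make the later `kubeadm init` phases fail
            }
            // The ClusterConfiguration document should carry the control-plane endpoint.
            if m["kind"] == "ClusterConfiguration" {
                fmt.Println("controlPlaneEndpoint:", m["controlPlaneEndpoint"])
            }
        }
    }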
	I0816 00:33:49.829787   78747 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0816 00:33:49.833673   78747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:33:49.846246   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:33:47.098130   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.098614   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.098645   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.098569   80212 retry.go:31] will retry after 663.568587ms: waiting for machine to come up
	I0816 00:33:47.763930   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:47.764447   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:47.764470   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:47.764376   80212 retry.go:31] will retry after 679.581678ms: waiting for machine to come up
	I0816 00:33:48.446082   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:48.446552   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:48.446579   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:48.446498   80212 retry.go:31] will retry after 1.090430732s: waiting for machine to come up
	I0816 00:33:49.538961   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:49.539454   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:49.539482   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:49.539397   80212 retry.go:31] will retry after 1.039148258s: waiting for machine to come up
	I0816 00:33:50.579642   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:50.580119   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:50.580144   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:50.580074   80212 retry.go:31] will retry after 1.440992413s: waiting for machine to come up
	I0816 00:33:51.788858   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:54.022577   78713 pod_ready.go:103] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:49.963020   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:33:49.980142   78747 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827 for IP: 192.168.50.128
	I0816 00:33:49.980170   78747 certs.go:194] generating shared ca certs ...
	I0816 00:33:49.980192   78747 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:33:49.980408   78747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:33:49.980470   78747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:33:49.980489   78747 certs.go:256] generating profile certs ...
	I0816 00:33:49.980583   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/client.key
	I0816 00:33:49.980669   78747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key.2062a467
	I0816 00:33:49.980737   78747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key
	I0816 00:33:49.980891   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:33:49.980940   78747 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:33:49.980949   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:33:49.980984   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:33:49.981021   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:33:49.981050   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:33:49.981102   78747 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:33:49.981835   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:33:50.014530   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:33:50.057377   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:33:50.085730   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:33:50.121721   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0816 00:33:50.166448   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:33:50.195059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:33:50.220059   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/default-k8s-diff-port-616827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:33:50.244288   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:33:50.268463   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:33:50.293203   78747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:33:50.318859   78747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:33:50.336625   78747 ssh_runner.go:195] Run: openssl version
	I0816 00:33:50.343301   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:33:50.355408   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360245   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.360312   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:33:50.366435   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:33:50.377753   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:33:50.389482   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394337   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.394419   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:33:50.400279   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:33:50.412410   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:33:50.424279   78747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429013   78747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.429077   78747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:33:50.435095   78747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:33:50.448148   78747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:33:50.453251   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:33:50.459730   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:33:50.466145   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:33:50.472438   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:33:50.478701   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:33:50.485081   78747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
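Each `openssl x509 -noout -in <cert> -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds, which is what tells minikube whether the existing control-plane certificates can be reused. The same test can be expressed with the Go standard library; a hedged sketch, not minikube's implementation (which shells out over SSH exactly as logged):

    // cert_expiry.go: hedged sketch of the `openssl x509 -checkend 86400` check in pure Go.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err) // true plays the role of a non-zero openssl exit: regenerate the cert
    }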
	I0816 00:33:50.490958   78747 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-616827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-616827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:33:50.491091   78747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:33:50.491173   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.545458   78747 cri.go:89] found id: ""
	I0816 00:33:50.545532   78747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:33:50.557054   78747 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:33:50.557074   78747 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:33:50.557122   78747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:33:50.570313   78747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:33:50.571774   78747 kubeconfig.go:125] found "default-k8s-diff-port-616827" server: "https://192.168.50.128:8444"
	I0816 00:33:50.574969   78747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:33:50.586066   78747 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I0816 00:33:50.586101   78747 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:33:50.586114   78747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:33:50.586172   78747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:33:50.631347   78747 cri.go:89] found id: ""
	I0816 00:33:50.631416   78747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:33:50.651296   78747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:33:50.665358   78747 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:33:50.665387   78747 kubeadm.go:157] found existing configuration files:
	
	I0816 00:33:50.665427   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0816 00:33:50.678634   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:33:50.678706   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:33:50.690376   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0816 00:33:50.702070   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:33:50.702132   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:33:50.714117   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.725349   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:33:50.725413   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:33:50.735691   78747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0816 00:33:50.745524   78747 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:33:50.745598   78747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:33:50.756310   78747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
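The grep/rm sequence above is the stale-config cleanup inside restartPrimaryControlPlane: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already references https://control-plane.minikube.internal:8444, and is otherwise deleted so the following `kubeadm init phase kubeconfig all` can regenerate it. A compact, hedged sketch of that per-file decision (illustrative only):

    // stale_kubeconfig.go: hedged sketch of "keep only configs pointing at the right endpoint".
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func cleanStaleConfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing elsewhere: remove it so kubeadm regenerates it.
                _ = os.Remove(p)
                fmt.Println("removed (stale or absent):", p)
                continue
            }
            fmt.Println("kept:", p)
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }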
	I0816 00:33:50.771825   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:50.908593   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.046812   78747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138178717s)
	I0816 00:33:52.046863   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.282111   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.357877   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:52.485435   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:33:52.485531   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:52.985717   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.486461   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:33:53.522663   78747 api_server.go:72] duration metric: took 1.037234176s to wait for apiserver process to appear ...
	I0816 00:33:53.522692   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:33:53.522713   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:52.022573   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:52.023319   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:52.023352   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:52.023226   80212 retry.go:31] will retry after 1.814668747s: waiting for machine to come up
	I0816 00:33:53.839539   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:53.839916   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:53.839944   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:53.839861   80212 retry.go:31] will retry after 1.900379439s: waiting for machine to come up
	I0816 00:33:55.742480   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:55.742981   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:55.743004   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:55.742920   80212 retry.go:31] will retry after 2.798728298s: waiting for machine to come up
	I0816 00:33:56.782681   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.782714   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:56.782730   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:56.828595   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:33:56.828628   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:33:57.022870   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.028291   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.028326   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:57.522858   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:57.533079   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:57.533120   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.023304   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.029913   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:33:58.029948   78747 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:33:58.523517   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:33:58.529934   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:33:58.536872   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:33:58.536898   78747 api_server.go:131] duration metric: took 5.014199256s to wait for apiserver health ...
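The healthz exchange above is typical for a control-plane restart: first 403s while anonymous access to /healthz is still forbidden, then 500s while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally a 200 "ok" after roughly five seconds. Polling such an endpoint until it reports healthy looks roughly like the hedged sketch below (the InsecureSkipVerify transport is an assumption to cope with the self-signed apiserver certificate; minikube's real client wiring differs):

    // healthz_poll.go: hedged sketch of polling an apiserver /healthz endpoint until it reports ok.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned "ok"
                }
                // 403 and 500 are expected while post-start hooks are still running.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.50.128:8444/healthz", 2*time.Minute))
    }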
	I0816 00:33:58.536907   78747 cni.go:84] Creating CNI manager for ""
	I0816 00:33:58.536916   78747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:33:58.539004   78747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:33:54.522157   78713 pod_ready.go:93] pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.522186   78713 pod_ready.go:82] duration metric: took 7.507358513s for pod "coredns-6f6b679f8f-54gqb" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.522201   78713 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529305   78713 pod_ready.go:93] pod "etcd-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.529323   78713 pod_ready.go:82] duration metric: took 7.114484ms for pod "etcd-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.529331   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536656   78713 pod_ready.go:93] pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.536688   78713 pod_ready.go:82] duration metric: took 7.349231ms for pod "kube-apiserver-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.536701   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542615   78713 pod_ready.go:93] pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.542637   78713 pod_ready.go:82] duration metric: took 5.927403ms for pod "kube-controller-manager-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.542650   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548165   78713 pod_ready.go:93] pod "kube-proxy-4xc89" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.548188   78713 pod_ready.go:82] duration metric: took 5.530073ms for pod "kube-proxy-4xc89" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.548200   78713 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919561   78713 pod_ready.go:93] pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace has status "Ready":"True"
	I0816 00:33:54.919586   78713 pod_ready.go:82] duration metric: took 371.377774ms for pod "kube-scheduler-embed-certs-758469" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:54.919598   78713 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:56.925892   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.926811   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:33:58.540592   78747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:33:58.554493   78747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:33:58.594341   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:33:58.605247   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:33:58.605293   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:33:58.605304   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:33:58.605314   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:33:58.605329   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:33:58.605342   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:33:58.605351   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:33:58.605358   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:33:58.605363   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:33:58.605372   78747 system_pods.go:74] duration metric: took 11.009517ms to wait for pod list to return data ...
	I0816 00:33:58.605384   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:33:58.609964   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:33:58.609996   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:33:58.610007   78747 node_conditions.go:105] duration metric: took 4.615471ms to run NodePressure ...
	I0816 00:33:58.610025   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:33:58.930292   78747 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937469   78747 kubeadm.go:739] kubelet initialised
	I0816 00:33:58.937499   78747 kubeadm.go:740] duration metric: took 7.181814ms waiting for restarted kubelet to initialise ...
	I0816 00:33:58.937509   78747 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:33:59.036968   78747 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.046554   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046589   78747 pod_ready.go:82] duration metric: took 9.589918ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.046601   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.046618   78747 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.053621   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053654   78747 pod_ready.go:82] duration metric: took 7.022323ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.053669   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.053678   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.065329   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065357   78747 pod_ready.go:82] duration metric: took 11.650757ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.065378   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.065387   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.074595   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074627   78747 pod_ready.go:82] duration metric: took 9.230183ms for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.074643   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.074657   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.399077   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399105   78747 pod_ready.go:82] duration metric: took 324.440722ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.399116   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-proxy-f99ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.399124   78747 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:33:59.797130   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797158   78747 pod_ready.go:82] duration metric: took 398.024149ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	E0816 00:33:59.797169   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:33:59.797176   78747 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:00.197929   78747 pod_ready.go:98] node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197961   78747 pod_ready.go:82] duration metric: took 400.777243ms for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:34:00.197976   78747 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-616827" hosting pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:00.197992   78747 pod_ready.go:39] duration metric: took 1.260464876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
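The WaitExtra block above polls each system-critical pod in kube-system until its Ready condition is True, skipping the wait when the hosting node itself is not Ready. The snippet below is a minimal, hypothetical client-go sketch of that kind of readiness poll; it is not minikube's pod_ready.go, and the kubeconfig path, pod name, and 4m0s timeout are copied from the log purely for illustration (the kubeconfig path only exists inside the guest VM).

// Illustrative sketch only: poll one kube-system pod until Ready or timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready
				}
			}
		}
		time.Sleep(2 * time.Second) // re-check on a fixed interval
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Path taken from the log; valid only inside the minikube guest.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system",
		"kube-scheduler-default-k8s-diff-port-616827", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}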
	I0816 00:34:00.198024   78747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:34:00.210255   78747 ops.go:34] apiserver oom_adj: -16
	I0816 00:34:00.210278   78747 kubeadm.go:597] duration metric: took 9.653197586s to restartPrimaryControlPlane
	I0816 00:34:00.210302   78747 kubeadm.go:394] duration metric: took 9.719364617s to StartCluster
	I0816 00:34:00.210322   78747 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.210405   78747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:00.212730   78747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:00.213053   78747 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:34:00.213162   78747 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:34:00.213247   78747 config.go:182] Loaded profile config "default-k8s-diff-port-616827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:00.213277   78747 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213292   78747 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213305   78747 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213313   78747 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:34:00.213344   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213352   78747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-616827"
	I0816 00:34:00.213298   78747 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-616827"
	I0816 00:34:00.213413   78747 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.213435   78747 addons.go:243] addon metrics-server should already be in state true
	I0816 00:34:00.213463   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.213751   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213795   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213752   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213886   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.213756   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.213992   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.215058   78747 out.go:177] * Verifying Kubernetes components...
	I0816 00:34:00.216719   78747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:00.229428   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0816 00:34:00.229676   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38475
	I0816 00:34:00.229881   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230164   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.230522   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230538   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230689   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.230727   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.230850   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.231488   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.231512   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.231754   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.232394   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.232426   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.232909   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0816 00:34:00.233400   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.233959   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.233979   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.234368   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.234576   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.238180   78747 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-616827"
	W0816 00:34:00.238203   78747 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:34:00.238230   78747 host.go:66] Checking if "default-k8s-diff-port-616827" exists ...
	I0816 00:34:00.238598   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.238642   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.249682   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0816 00:34:00.250163   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.250894   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.250919   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.251326   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0816 00:34:00.251324   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.251663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.251828   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.252294   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.252318   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.252863   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.253070   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.253746   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.254958   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.255056   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0816 00:34:00.255513   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.256043   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.256083   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.256121   78747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:00.256494   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.257255   78747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:34:00.257377   78747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:00.257422   78747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:00.259132   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:34:00.259154   78747 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:34:00.259176   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.259204   78747 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.259223   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:34:00.259241   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.263096   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263213   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263688   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263810   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.263850   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263874   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.263996   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264175   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264186   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.264321   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264336   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.264441   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.264511   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.264695   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.274557   78747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0816 00:34:00.274984   78747 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:00.275444   78747 main.go:141] libmachine: Using API Version  1
	I0816 00:34:00.275463   78747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:00.275735   78747 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:00.275946   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetState
	I0816 00:34:00.277509   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .DriverName
	I0816 00:34:00.277745   78747 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.277762   78747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:34:00.277782   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHHostname
	I0816 00:34:00.280264   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280660   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:4e:04", ip: ""} in network mk-default-k8s-diff-port-616827: {Iface:virbr1 ExpiryTime:2024-08-16 01:33:36 +0000 UTC Type:0 Mac:52:54:00:6e:4e:04 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-616827 Clientid:01:52:54:00:6e:4e:04}
	I0816 00:34:00.280689   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | domain default-k8s-diff-port-616827 has defined IP address 192.168.50.128 and MAC address 52:54:00:6e:4e:04 in network mk-default-k8s-diff-port-616827
	I0816 00:34:00.280790   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHPort
	I0816 00:34:00.280982   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHKeyPath
	I0816 00:34:00.281140   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .GetSSHUsername
	I0816 00:34:00.281286   78747 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/default-k8s-diff-port-616827/id_rsa Username:docker}
	I0816 00:34:00.445986   78747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:00.465112   78747 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:00.568927   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:34:00.602693   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:34:00.620335   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:34:00.620355   78747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:34:00.667790   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:34:00.667810   78747 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:34:00.698510   78747 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.698536   78747 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:34:00.723319   78747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:34:00.975635   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.975663   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976006   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976007   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976030   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.976044   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.976075   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.976347   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.976340   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.976376   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:00.983280   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:00.983304   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:00.983587   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:00.983586   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:00.983620   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.678707   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678733   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.678889   78747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.076166351s)
	I0816 00:34:01.678936   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.678955   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679115   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679136   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679145   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679153   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679473   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679497   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679484   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679514   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679521   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679525   78747 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-616827"
	I0816 00:34:01.679528   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.679537   78747 main.go:141] libmachine: Making call to close driver server
	I0816 00:34:01.679544   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) Calling .Close
	I0816 00:34:01.679821   78747 main.go:141] libmachine: (default-k8s-diff-port-616827) DBG | Closing plugin on server side
	I0816 00:34:01.679862   78747 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:34:01.679887   78747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:34:01.683006   78747 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
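The addon step above copies the metrics-server and storage-provisioner manifests into /etc/kubernetes/addons and applies them with the in-VM kubectl binary. The following is a rough sketch of that same apply call, assuming it runs as a local command inside the guest where those paths exist; it is an illustration, not minikube's addons.go (which issues the command over SSH).

// Illustrative sketch only: mirror the kubectl apply shown in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	// sudo accepts VAR=value settings before the command, as in the log line.
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}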
	I0816 00:33:58.543282   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:33:58.543753   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | unable to find current IP address of domain old-k8s-version-098619 in network mk-old-k8s-version-098619
	I0816 00:33:58.543783   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | I0816 00:33:58.543689   80212 retry.go:31] will retry after 4.402812235s: waiting for machine to come up
	I0816 00:34:00.927244   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:03.428032   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:04.178649   78489 start.go:364] duration metric: took 54.753990439s to acquireMachinesLock for "no-preload-819398"
	I0816 00:34:04.178706   78489 start.go:96] Skipping create...Using existing machine configuration
	I0816 00:34:04.178714   78489 fix.go:54] fixHost starting: 
	I0816 00:34:04.179124   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:34:04.179162   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:34:04.195783   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0816 00:34:04.196138   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:34:04.196590   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:34:04.196614   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:34:04.196962   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:34:04.197161   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:04.197303   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:34:04.198795   78489 fix.go:112] recreateIfNeeded on no-preload-819398: state=Stopped err=<nil>
	I0816 00:34:04.198814   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	W0816 00:34:04.198978   78489 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 00:34:04.200736   78489 out.go:177] * Restarting existing kvm2 VM for "no-preload-819398" ...
	I0816 00:34:01.684641   78747 addons.go:510] duration metric: took 1.471480873s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0816 00:34:02.473603   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:04.476035   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:02.951078   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951631   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has current primary IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.951672   79191 main.go:141] libmachine: (old-k8s-version-098619) Found IP for machine: 192.168.72.137
	I0816 00:34:02.951687   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserving static IP address...
	I0816 00:34:02.952154   79191 main.go:141] libmachine: (old-k8s-version-098619) Reserved static IP address: 192.168.72.137
	I0816 00:34:02.952186   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.952201   79191 main.go:141] libmachine: (old-k8s-version-098619) Waiting for SSH to be available...
	I0816 00:34:02.952224   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | skip adding static IP to network mk-old-k8s-version-098619 - found existing host DHCP lease matching {name: "old-k8s-version-098619", mac: "52:54:00:22:73:72", ip: "192.168.72.137"}
	I0816 00:34:02.952236   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Getting to WaitForSSH function...
	I0816 00:34:02.954361   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954686   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:02.954715   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:02.954791   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH client type: external
	I0816 00:34:02.954830   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa (-rw-------)
	I0816 00:34:02.954871   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:02.954890   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | About to run SSH command:
	I0816 00:34:02.954909   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | exit 0
	I0816 00:34:03.078035   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:03.078408   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetConfigRaw
	I0816 00:34:03.079002   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.081041   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081391   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.081489   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.081566   79191 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/config.json ...
	I0816 00:34:03.081748   79191 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:03.081767   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.082007   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.084022   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084333   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.084357   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.084499   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.084700   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.084867   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.085074   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.085266   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.085509   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.085525   79191 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:03.186066   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:03.186094   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186368   79191 buildroot.go:166] provisioning hostname "old-k8s-version-098619"
	I0816 00:34:03.186397   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.186597   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.189330   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189658   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.189702   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.189792   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.190004   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190185   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.190344   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.190481   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.190665   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.190688   79191 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098619 && echo "old-k8s-version-098619" | sudo tee /etc/hostname
	I0816 00:34:03.304585   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098619
	
	I0816 00:34:03.304608   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.307415   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307732   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.307763   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.307955   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.308155   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308314   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.308474   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.308629   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.308795   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.308811   79191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098619/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:03.418968   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:03.419010   79191 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:03.419045   79191 buildroot.go:174] setting up certificates
	I0816 00:34:03.419058   79191 provision.go:84] configureAuth start
	I0816 00:34:03.419072   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetMachineName
	I0816 00:34:03.419338   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:03.421799   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422159   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.422198   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.422401   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.425023   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425417   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.425445   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.425557   79191 provision.go:143] copyHostCerts
	I0816 00:34:03.425624   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:03.425646   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:03.425717   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:03.425875   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:03.425888   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:03.425921   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:03.426007   79191 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:03.426017   79191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:03.426045   79191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:03.426112   79191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098619 san=[127.0.0.1 192.168.72.137 localhost minikube old-k8s-version-098619]
	I0816 00:34:03.509869   79191 provision.go:177] copyRemoteCerts
	I0816 00:34:03.509932   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:03.509961   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.512603   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.512938   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.512984   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.513163   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.513451   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.513617   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.513777   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:03.596330   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 00:34:03.621969   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:03.646778   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:03.671937   79191 provision.go:87] duration metric: took 252.867793ms to configureAuth
	I0816 00:34:03.671964   79191 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:03.672149   79191 config.go:182] Loaded profile config "old-k8s-version-098619": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0816 00:34:03.672250   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.675207   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675600   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.675625   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.675787   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.676006   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676199   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.676360   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.676549   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:03.676762   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:03.676779   79191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:03.945259   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:03.945287   79191 machine.go:96] duration metric: took 863.526642ms to provisionDockerMachine
	I0816 00:34:03.945298   79191 start.go:293] postStartSetup for "old-k8s-version-098619" (driver="kvm2")
	I0816 00:34:03.945308   79191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:03.945335   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:03.945638   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:03.945666   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:03.948590   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.948967   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:03.948989   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:03.949152   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:03.949350   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:03.949491   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:03.949645   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.028994   79191 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:04.033776   79191 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:04.033799   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:04.033872   79191 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:04.033943   79191 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:04.034033   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:04.045492   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:04.071879   79191 start.go:296] duration metric: took 126.569157ms for postStartSetup
	I0816 00:34:04.071920   79191 fix.go:56] duration metric: took 19.817260263s for fixHost
	I0816 00:34:04.071944   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.074942   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075297   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.075325   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.075504   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.075699   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075846   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.075977   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.076146   79191 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:04.076319   79191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.137 22 <nil> <nil>}
	I0816 00:34:04.076332   79191 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:04.178483   79191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768444.133390375
	
	I0816 00:34:04.178510   79191 fix.go:216] guest clock: 1723768444.133390375
	I0816 00:34:04.178519   79191 fix.go:229] Guest: 2024-08-16 00:34:04.133390375 +0000 UTC Remote: 2024-08-16 00:34:04.071925107 +0000 UTC m=+252.320651106 (delta=61.465268ms)
	I0816 00:34:04.178537   79191 fix.go:200] guest clock delta is within tolerance: 61.465268ms
	I0816 00:34:04.178541   79191 start.go:83] releasing machines lock for "old-k8s-version-098619", held for 19.923923778s
	I0816 00:34:04.178567   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.178875   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:04.181999   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182458   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.182490   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.182660   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183192   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183357   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .DriverName
	I0816 00:34:04.183412   79191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:04.183461   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.183553   79191 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:04.183575   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHHostname
	I0816 00:34:04.186192   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186418   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186507   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186531   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186679   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.186811   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:04.186836   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:04.186850   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187016   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187032   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHPort
	I0816 00:34:04.187211   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHKeyPath
	I0816 00:34:04.187215   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.187364   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetSSHUsername
	I0816 00:34:04.187488   79191 sshutil.go:53] new ssh client: &{IP:192.168.72.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/old-k8s-version-098619/id_rsa Username:docker}
	I0816 00:34:04.283880   79191 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:04.289798   79191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:04.436822   79191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:04.443547   79191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:04.443631   79191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:04.464783   79191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:04.464807   79191 start.go:495] detecting cgroup driver to use...
	I0816 00:34:04.464873   79191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:04.481504   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:04.501871   79191 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:04.501942   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:04.521898   79191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:04.538186   79191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:04.704361   79191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:04.881682   79191 docker.go:233] disabling docker service ...
	I0816 00:34:04.881757   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:04.900264   79191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:04.916152   79191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:05.048440   79191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:05.166183   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:05.181888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:05.202525   79191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0816 00:34:05.202592   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.214655   79191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:05.214712   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.226052   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.236878   79191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:05.249217   79191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:05.260362   79191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:05.271039   79191 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:05.271108   79191 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:05.290423   79191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:05.307175   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:05.465815   79191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:05.640787   79191 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:05.640878   79191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:05.646821   79191 start.go:563] Will wait 60s for crictl version
	I0816 00:34:05.646883   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:05.651455   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:05.698946   79191 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:05.699037   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.729185   79191 ssh_runner.go:195] Run: crio --version
	I0816 00:34:05.772063   79191 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0816 00:34:05.773406   79191 main.go:141] libmachine: (old-k8s-version-098619) Calling .GetIP
	I0816 00:34:05.776689   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777177   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:73:72", ip: ""} in network mk-old-k8s-version-098619: {Iface:virbr3 ExpiryTime:2024-08-16 01:33:56 +0000 UTC Type:0 Mac:52:54:00:22:73:72 Iaid: IPaddr:192.168.72.137 Prefix:24 Hostname:old-k8s-version-098619 Clientid:01:52:54:00:22:73:72}
	I0816 00:34:05.777241   79191 main.go:141] libmachine: (old-k8s-version-098619) DBG | domain old-k8s-version-098619 has defined IP address 192.168.72.137 and MAC address 52:54:00:22:73:72 in network mk-old-k8s-version-098619
	I0816 00:34:05.777435   79191 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:05.782377   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:05.797691   79191 kubeadm.go:883] updating cluster {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:05.797872   79191 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0816 00:34:05.797953   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:05.861468   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:05.861557   79191 ssh_runner.go:195] Run: which lz4
	I0816 00:34:05.866880   79191 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 00:34:05.872036   79191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 00:34:05.872071   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0816 00:34:04.202120   78489 main.go:141] libmachine: (no-preload-819398) Calling .Start
	I0816 00:34:04.202293   78489 main.go:141] libmachine: (no-preload-819398) Ensuring networks are active...
	I0816 00:34:04.203062   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network default is active
	I0816 00:34:04.203345   78489 main.go:141] libmachine: (no-preload-819398) Ensuring network mk-no-preload-819398 is active
	I0816 00:34:04.205286   78489 main.go:141] libmachine: (no-preload-819398) Getting domain xml...
	I0816 00:34:04.206025   78489 main.go:141] libmachine: (no-preload-819398) Creating domain...
	I0816 00:34:05.553661   78489 main.go:141] libmachine: (no-preload-819398) Waiting to get IP...
	I0816 00:34:05.554629   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.555210   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.555309   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.555211   80407 retry.go:31] will retry after 298.759084ms: waiting for machine to come up
	I0816 00:34:05.856046   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:05.856571   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:05.856604   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:05.856530   80407 retry.go:31] will retry after 293.278331ms: waiting for machine to come up
	I0816 00:34:06.151110   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.151542   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.151571   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.151498   80407 retry.go:31] will retry after 332.472371ms: waiting for machine to come up
	I0816 00:34:06.485927   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:06.486487   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:06.486514   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:06.486459   80407 retry.go:31] will retry after 600.720276ms: waiting for machine to come up
	I0816 00:34:05.926954   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.929140   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:06.972334   78747 node_ready.go:53] node "default-k8s-diff-port-616827" has status "Ready":"False"
	I0816 00:34:07.469652   78747 node_ready.go:49] node "default-k8s-diff-port-616827" has status "Ready":"True"
	I0816 00:34:07.469684   78747 node_ready.go:38] duration metric: took 7.004536271s for node "default-k8s-diff-port-616827" to be "Ready" ...
	I0816 00:34:07.469700   78747 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:07.476054   78747 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482839   78747 pod_ready.go:93] pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.482861   78747 pod_ready.go:82] duration metric: took 6.779315ms for pod "coredns-6f6b679f8f-4n9qq" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.482871   78747 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489325   78747 pod_ready.go:93] pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.489348   78747 pod_ready.go:82] duration metric: took 6.470629ms for pod "etcd-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.489357   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495536   78747 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:07.495555   78747 pod_ready.go:82] duration metric: took 6.192295ms for pod "kube-apiserver-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:07.495565   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:09.503258   78747 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:07.631328   79191 crio.go:462] duration metric: took 1.76448771s to copy over tarball
	I0816 00:34:07.631413   79191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 00:34:10.662435   79191 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.030990355s)
	I0816 00:34:10.662472   79191 crio.go:469] duration metric: took 3.031115615s to extract the tarball
	I0816 00:34:10.662482   79191 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0816 00:34:10.707627   79191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:10.745704   79191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0816 00:34:10.745742   79191 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.745838   79191 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.745808   79191 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.745914   79191 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.745860   79191 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.745943   79191 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.745884   79191 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.746059   79191 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747781   79191 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.747803   79191 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.747808   79191 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.747824   79191 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.747842   79191 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.747883   79191 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.747895   79191 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0816 00:34:10.747948   79191 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:10.916488   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:10.923947   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:10.931668   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0816 00:34:10.942764   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:10.948555   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:10.957593   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:10.970039   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0816 00:34:11.012673   79191 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 00:34:11.012707   79191 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.012778   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.026267   79191 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:11.135366   79191 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 00:34:11.135398   79191 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.135451   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.149180   79191 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 00:34:11.149226   79191 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.149271   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183480   79191 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 00:34:11.183526   79191 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.183526   79191 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 00:34:11.183578   79191 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.183584   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.183637   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186513   79191 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 00:34:11.186559   79191 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.186622   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186632   79191 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 00:34:11.186658   79191 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0816 00:34:11.186699   79191 ssh_runner.go:195] Run: which crictl
	I0816 00:34:11.186722   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.252857   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.252914   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.252935   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.253007   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.253012   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.253083   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.253140   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420527   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.420559   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.420564   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.420638   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.420732   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0816 00:34:11.420791   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.420813   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591141   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0816 00:34:11.591197   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0816 00:34:11.591267   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0816 00:34:11.591337   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0816 00:34:11.591418   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0816 00:34:11.591453   79191 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0816 00:34:11.591505   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0816 00:34:11.721234   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0816 00:34:11.725967   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0816 00:34:11.731189   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0816 00:34:11.731276   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0816 00:34:11.742195   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0816 00:34:11.742224   79191 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0816 00:34:11.742265   79191 cache_images.go:92] duration metric: took 996.507737ms to LoadCachedImages
	W0816 00:34:11.742327   79191 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0816 00:34:11.742342   79191 kubeadm.go:934] updating node { 192.168.72.137 8443 v1.20.0 crio true true} ...
	I0816 00:34:11.742464   79191 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-098619 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:11.742546   79191 ssh_runner.go:195] Run: crio config
	I0816 00:34:07.089462   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.090073   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.090099   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.089985   80407 retry.go:31] will retry after 666.260439ms: waiting for machine to come up
	I0816 00:34:07.757621   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:07.758156   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:07.758182   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:07.758105   80407 retry.go:31] will retry after 782.571604ms: waiting for machine to come up
	I0816 00:34:08.542021   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:08.542426   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:08.542475   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:08.542381   80407 retry.go:31] will retry after 840.347921ms: waiting for machine to come up
	I0816 00:34:09.384399   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:09.384866   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:09.384893   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:09.384824   80407 retry.go:31] will retry after 1.376690861s: waiting for machine to come up
	I0816 00:34:10.763158   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:10.763547   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:10.763573   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:10.763484   80407 retry.go:31] will retry after 1.237664711s: waiting for machine to come up
	I0816 00:34:10.426656   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:12.429312   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.354758   78747 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.354783   78747 pod_ready.go:82] duration metric: took 3.859210458s for pod "kube-controller-manager-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.354796   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363323   78747 pod_ready.go:93] pod "kube-proxy-f99ds" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.363347   78747 pod_ready.go:82] duration metric: took 8.543406ms for pod "kube-proxy-f99ds" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.363359   78747 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369799   78747 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:11.369826   78747 pod_ready.go:82] duration metric: took 6.458192ms for pod "kube-scheduler-default-k8s-diff-port-616827" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:11.369858   78747 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:13.376479   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:11.791749   79191 cni.go:84] Creating CNI manager for ""
	I0816 00:34:11.791779   79191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:11.791791   79191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:11.791810   79191 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.137 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098619 NodeName:old-k8s-version-098619 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 00:34:11.791969   79191 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-098619"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:11.792046   79191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 00:34:11.802572   79191 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:11.802649   79191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:11.812583   79191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0816 00:34:11.831551   79191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:11.852476   79191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0816 00:34:11.875116   79191 ssh_runner.go:195] Run: grep 192.168.72.137	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:11.879833   79191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:11.893308   79191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:12.038989   79191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:12.061736   79191 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619 for IP: 192.168.72.137
	I0816 00:34:12.061761   79191 certs.go:194] generating shared ca certs ...
	I0816 00:34:12.061780   79191 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.061992   79191 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:12.062046   79191 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:12.062059   79191 certs.go:256] generating profile certs ...
	I0816 00:34:12.062193   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/client.key
	I0816 00:34:12.062283   79191 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key.97f18ce4
	I0816 00:34:12.062343   79191 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key
	I0816 00:34:12.062485   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:12.062523   79191 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:12.062536   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:12.062579   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:12.062614   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:12.062658   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:12.062721   79191 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:12.063630   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:12.106539   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:12.139393   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:12.171548   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:12.213113   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 00:34:12.244334   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 00:34:12.287340   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:12.331047   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/old-k8s-version-098619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 00:34:12.369666   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:12.397260   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:12.424009   79191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:12.450212   79191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:12.471550   79191 ssh_runner.go:195] Run: openssl version
	I0816 00:34:12.479821   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:12.494855   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500546   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.500620   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:12.508817   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:12.521689   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:12.533904   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538789   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.538946   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:12.546762   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:12.561940   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:12.575852   79191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582377   79191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.582457   79191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:12.590772   79191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:12.604976   79191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:12.610332   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:12.617070   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:12.625769   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:12.634342   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:12.641486   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:12.650090   79191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 00:34:12.658206   79191 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.137 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:12.658306   79191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:12.658392   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.703323   79191 cri.go:89] found id: ""
	I0816 00:34:12.703399   79191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:12.714950   79191 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:12.714970   79191 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:12.715047   79191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:12.727051   79191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:12.728059   79191 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-098619" does not appear in /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:34:12.728655   79191 kubeconfig.go:62] /home/jenkins/minikube-integration/19452-12919/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-098619" cluster setting kubeconfig missing "old-k8s-version-098619" context setting]
	I0816 00:34:12.729552   79191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:12.731269   79191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:12.744732   79191 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.137
	I0816 00:34:12.744766   79191 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:12.744777   79191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:12.744833   79191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:12.783356   79191 cri.go:89] found id: ""
	I0816 00:34:12.783432   79191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:12.801942   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:12.816412   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:12.816433   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:12.816480   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:12.827686   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:12.827757   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:12.838063   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:12.847714   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:12.847808   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:12.858274   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.869328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:12.869389   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:12.881457   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:12.892256   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:12.892325   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:12.902115   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:12.912484   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.040145   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:13.851639   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.085396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.208430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:14.321003   79191 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:14.321084   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:14.822130   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.321780   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:15.822121   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:16.322077   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:12.002977   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:12.003441   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:12.003470   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:12.003401   80407 retry.go:31] will retry after 1.413320186s: waiting for machine to come up
	I0816 00:34:13.418972   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:13.419346   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:13.419374   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:13.419284   80407 retry.go:31] will retry after 2.055525842s: waiting for machine to come up
	I0816 00:34:15.476550   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:15.477044   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:15.477072   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:15.477021   80407 retry.go:31] will retry after 2.728500649s: waiting for machine to come up
	I0816 00:34:14.926133   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.930322   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:15.377291   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:17.877627   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:16.821714   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.321166   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:17.821648   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.321711   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.821520   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.321732   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:19.821325   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.321783   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:20.821958   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:21.321139   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:18.208958   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:18.209350   78489 main.go:141] libmachine: (no-preload-819398) DBG | unable to find current IP address of domain no-preload-819398 in network mk-no-preload-819398
	I0816 00:34:18.209379   78489 main.go:141] libmachine: (no-preload-819398) DBG | I0816 00:34:18.209302   80407 retry.go:31] will retry after 3.922749943s: waiting for machine to come up
	I0816 00:34:19.426265   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.926480   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.134804   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135230   78489 main.go:141] libmachine: (no-preload-819398) Found IP for machine: 192.168.61.15
	I0816 00:34:22.135266   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has current primary IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.135292   78489 main.go:141] libmachine: (no-preload-819398) Reserving static IP address...
	I0816 00:34:22.135596   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.135629   78489 main.go:141] libmachine: (no-preload-819398) DBG | skip adding static IP to network mk-no-preload-819398 - found existing host DHCP lease matching {name: "no-preload-819398", mac: "52:54:00:ee:9f:2c", ip: "192.168.61.15"}
	I0816 00:34:22.135644   78489 main.go:141] libmachine: (no-preload-819398) Reserved static IP address: 192.168.61.15
	I0816 00:34:22.135661   78489 main.go:141] libmachine: (no-preload-819398) Waiting for SSH to be available...
	I0816 00:34:22.135675   78489 main.go:141] libmachine: (no-preload-819398) DBG | Getting to WaitForSSH function...
	I0816 00:34:22.137639   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.137925   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.137956   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.138099   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH client type: external
	I0816 00:34:22.138141   78489 main.go:141] libmachine: (no-preload-819398) DBG | Using SSH private key: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa (-rw-------)
	I0816 00:34:22.138198   78489 main.go:141] libmachine: (no-preload-819398) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 00:34:22.138233   78489 main.go:141] libmachine: (no-preload-819398) DBG | About to run SSH command:
	I0816 00:34:22.138248   78489 main.go:141] libmachine: (no-preload-819398) DBG | exit 0
	I0816 00:34:22.262094   78489 main.go:141] libmachine: (no-preload-819398) DBG | SSH cmd err, output: <nil>: 
	I0816 00:34:22.262496   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetConfigRaw
	I0816 00:34:22.263081   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.265419   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.265746   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.265782   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.266097   78489 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/config.json ...
	I0816 00:34:22.266283   78489 machine.go:93] provisionDockerMachine start ...
	I0816 00:34:22.266301   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:22.266501   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.268848   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269269   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.269308   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.269356   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.269537   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269684   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.269803   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.269971   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.270185   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.270197   78489 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 00:34:22.374848   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 00:34:22.374880   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375169   78489 buildroot.go:166] provisioning hostname "no-preload-819398"
	I0816 00:34:22.375195   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.375407   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.378309   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378649   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.378678   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.378853   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.379060   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379203   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.379362   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.379568   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.379735   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.379749   78489 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-819398 && echo "no-preload-819398" | sudo tee /etc/hostname
	I0816 00:34:22.496438   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-819398
	
	I0816 00:34:22.496467   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.499101   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499411   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.499443   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.499703   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.499912   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500116   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.500247   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.500419   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.500624   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.500650   78489 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-819398' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-819398/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-819398' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 00:34:22.619769   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 00:34:22.619802   78489 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19452-12919/.minikube CaCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19452-12919/.minikube}
	I0816 00:34:22.619826   78489 buildroot.go:174] setting up certificates
	I0816 00:34:22.619837   78489 provision.go:84] configureAuth start
	I0816 00:34:22.619847   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetMachineName
	I0816 00:34:22.620106   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:22.623130   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623485   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.623510   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.623629   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.625964   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626308   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.626335   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.626475   78489 provision.go:143] copyHostCerts
	I0816 00:34:22.626536   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem, removing ...
	I0816 00:34:22.626557   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem
	I0816 00:34:22.626629   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/key.pem (1675 bytes)
	I0816 00:34:22.626756   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem, removing ...
	I0816 00:34:22.626768   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem
	I0816 00:34:22.626798   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/ca.pem (1082 bytes)
	I0816 00:34:22.626889   78489 exec_runner.go:144] found /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem, removing ...
	I0816 00:34:22.626899   78489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem
	I0816 00:34:22.626925   78489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19452-12919/.minikube/cert.pem (1123 bytes)
	I0816 00:34:22.627008   78489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem org=jenkins.no-preload-819398 san=[127.0.0.1 192.168.61.15 localhost minikube no-preload-819398]
	I0816 00:34:22.710036   78489 provision.go:177] copyRemoteCerts
	I0816 00:34:22.710093   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 00:34:22.710120   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.712944   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713380   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.713409   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.713612   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.713780   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.713926   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.714082   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:22.800996   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 00:34:22.828264   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0816 00:34:22.855258   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 00:34:22.880981   78489 provision.go:87] duration metric: took 261.134406ms to configureAuth
	I0816 00:34:22.881013   78489 buildroot.go:189] setting minikube options for container-runtime
	I0816 00:34:22.881176   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:34:22.881240   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:22.883962   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884348   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:22.884368   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:22.884611   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:22.884828   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885052   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:22.885248   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:22.885448   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:22.885639   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:22.885661   78489 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0816 00:34:23.154764   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0816 00:34:23.154802   78489 machine.go:96] duration metric: took 888.504728ms to provisionDockerMachine
	I0816 00:34:23.154821   78489 start.go:293] postStartSetup for "no-preload-819398" (driver="kvm2")
	I0816 00:34:23.154837   78489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 00:34:23.154860   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.155176   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 00:34:23.155205   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.158105   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158482   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.158517   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.158674   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.158864   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.159039   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.159198   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.241041   78489 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 00:34:23.245237   78489 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 00:34:23.245260   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/addons for local assets ...
	I0816 00:34:23.245324   78489 filesync.go:126] Scanning /home/jenkins/minikube-integration/19452-12919/.minikube/files for local assets ...
	I0816 00:34:23.245398   78489 filesync.go:149] local asset: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem -> 200782.pem in /etc/ssl/certs
	I0816 00:34:23.245480   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 00:34:23.254735   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:23.279620   78489 start.go:296] duration metric: took 124.783636ms for postStartSetup
	I0816 00:34:23.279668   78489 fix.go:56] duration metric: took 19.100951861s for fixHost
	I0816 00:34:23.279693   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.282497   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.282959   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.282981   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.283184   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.283376   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283514   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.283687   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.283870   78489 main.go:141] libmachine: Using SSH client type: native
	I0816 00:34:23.284027   78489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.15 22 <nil> <nil>}
	I0816 00:34:23.284037   78489 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 00:34:23.390632   78489 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723768463.360038650
	
	I0816 00:34:23.390658   78489 fix.go:216] guest clock: 1723768463.360038650
	I0816 00:34:23.390668   78489 fix.go:229] Guest: 2024-08-16 00:34:23.36003865 +0000 UTC Remote: 2024-08-16 00:34:23.27967333 +0000 UTC m=+356.445975156 (delta=80.36532ms)
	I0816 00:34:23.390697   78489 fix.go:200] guest clock delta is within tolerance: 80.36532ms
	I0816 00:34:23.390710   78489 start.go:83] releasing machines lock for "no-preload-819398", held for 19.212026147s
	I0816 00:34:23.390729   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.390977   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:23.393728   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394050   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.394071   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.394255   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394722   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394895   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:34:23.394977   78489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 00:34:23.395028   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.395135   78489 ssh_runner.go:195] Run: cat /version.json
	I0816 00:34:23.395151   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:34:23.397773   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.397939   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398196   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398237   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398354   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398480   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:23.398507   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:23.398515   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398717   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.398722   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:34:23.398887   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:34:23.398884   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.399029   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:34:23.399164   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:34:23.497983   78489 ssh_runner.go:195] Run: systemctl --version
	I0816 00:34:23.503896   78489 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0816 00:34:23.660357   78489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 00:34:23.666714   78489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 00:34:23.666775   78489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 00:34:23.684565   78489 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 00:34:23.684586   78489 start.go:495] detecting cgroup driver to use...
	I0816 00:34:23.684655   78489 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 00:34:23.701981   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 00:34:23.715786   78489 docker.go:217] disabling cri-docker service (if available) ...
	I0816 00:34:23.715852   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 00:34:23.733513   78489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 00:34:23.748705   78489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 00:34:23.866341   78489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 00:34:24.016845   78489 docker.go:233] disabling docker service ...
	I0816 00:34:24.016918   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 00:34:24.032673   78489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 00:34:24.046465   78489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 00:34:24.184862   78489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 00:34:24.309066   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 00:34:24.323818   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 00:34:24.344352   78489 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0816 00:34:24.344422   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.355015   78489 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0816 00:34:24.355093   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.365665   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.377238   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.388619   78489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 00:34:24.399306   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.410087   78489 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.428465   78489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0816 00:34:24.439026   78489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 00:34:24.448856   78489 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 00:34:24.448943   78489 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0816 00:34:24.463002   78489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 00:34:24.473030   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:24.587542   78489 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0816 00:34:24.719072   78489 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0816 00:34:24.719159   78489 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0816 00:34:24.723789   78489 start.go:563] Will wait 60s for crictl version
	I0816 00:34:24.723842   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:24.727616   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 00:34:24.766517   78489 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0816 00:34:24.766600   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.795204   78489 ssh_runner.go:195] Run: crio --version
	I0816 00:34:24.824529   78489 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0816 00:34:20.376278   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:22.376510   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:24.876314   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:21.822114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.321350   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:22.821541   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.322014   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:23.821938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.321883   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.821178   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.321881   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:25.821199   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:26.321573   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:24.825725   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetIP
	I0816 00:34:24.828458   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829018   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:34:24.829045   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:34:24.829336   78489 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0816 00:34:24.833711   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:24.847017   78489 kubeadm.go:883] updating cluster {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 00:34:24.847136   78489 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0816 00:34:24.847171   78489 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 00:34:24.883489   78489 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0816 00:34:24.883515   78489 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 00:34:24.883592   78489 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.883612   78489 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.883664   78489 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:24.883690   78489 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.883719   78489 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.883595   78489 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.883927   78489 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.884016   78489 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885061   78489 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:24.885185   78489 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 00:34:24.885207   78489 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:24.885204   78489 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:24.885225   78489 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:24.885157   78489 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.042311   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.042317   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.048181   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.050502   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.059137   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.091688   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0816 00:34:25.096653   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.126261   78489 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0816 00:34:25.126311   78489 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.126368   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.164673   78489 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.189972   78489 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0816 00:34:25.190014   78489 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.190051   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249632   78489 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0816 00:34:25.249674   78489 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.249717   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249780   78489 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0816 00:34:25.249824   78489 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.249884   78489 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0816 00:34:25.249910   78489 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.249887   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.249942   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360038   78489 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0816 00:34:25.360082   78489 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.360121   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360133   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.360191   78489 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 00:34:25.360208   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.360221   78489 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.360256   78489 ssh_runner.go:195] Run: which crictl
	I0816 00:34:25.360283   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.360326   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.360337   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.462610   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.462691   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.480037   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.480114   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.480176   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.480211   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.489343   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.642853   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0816 00:34:25.642913   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.642963   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.645719   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0816 00:34:25.645749   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 00:34:25.645833   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0816 00:34:25.645899   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0816 00:34:25.802574   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 00:34:25.802645   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0816 00:34:25.802687   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.802728   78489 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:34:25.808235   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 00:34:25.808330   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0816 00:34:25.808387   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 00:34:25.808401   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 00:34:25.808432   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:25.808334   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:25.808471   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:25.808480   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:25.816510   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0816 00:34:25.816527   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.816560   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0816 00:34:25.885445   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 00:34:25.885532   78489 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 00:34:25.885549   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:25.885588   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0816 00:34:25.885600   78489 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:25.885674   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0816 00:34:25.885690   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0816 00:34:25.885711   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0816 00:34:24.426102   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.927534   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.877013   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:29.378108   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:26.821489   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.322094   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.321201   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:28.821854   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.321188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:29.821729   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.321316   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:30.821998   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:31.322184   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:27.938767   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.122182459s)
	I0816 00:34:27.938804   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0816 00:34:27.938801   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.05323098s)
	I0816 00:34:27.938826   78489 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.05321158s)
	I0816 00:34:27.938831   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0816 00:34:27.938833   78489 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:27.938843   78489 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0816 00:34:27.938906   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0816 00:34:31.645449   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.706515577s)
	I0816 00:34:31.645486   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0816 00:34:31.645514   78489 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:31.645563   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0816 00:34:29.427463   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.927253   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.875608   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:33.876822   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:31.821361   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.321205   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:32.822088   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.322126   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.821956   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.321921   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:34.821245   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:35.822034   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:36.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:33.625714   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.980118908s)
	I0816 00:34:33.625749   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0816 00:34:33.625773   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:33.625824   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0816 00:34:35.680134   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.054281396s)
	I0816 00:34:35.680167   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0816 00:34:35.680209   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:35.680276   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0816 00:34:34.426416   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.427589   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:38.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:35.877327   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:37.877385   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:36.821567   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.321329   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.822169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.321832   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:38.821404   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.321406   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:39.821914   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.322169   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:40.821149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:41.322125   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:37.430152   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.749849436s)
	I0816 00:34:37.430180   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0816 00:34:37.430208   78489 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:37.430254   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0816 00:34:39.684335   78489 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.254047221s)
	I0816 00:34:39.684365   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0816 00:34:39.684391   78489 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:39.684445   78489 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0816 00:34:40.328672   78489 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19452-12919/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 00:34:40.328722   78489 cache_images.go:123] Successfully loaded all cached images
	I0816 00:34:40.328729   78489 cache_images.go:92] duration metric: took 15.445200533s to LoadCachedImages
	I0816 00:34:40.328743   78489 kubeadm.go:934] updating node { 192.168.61.15 8443 v1.31.0 crio true true} ...
	I0816 00:34:40.328897   78489 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-819398 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 00:34:40.328994   78489 ssh_runner.go:195] Run: crio config
	I0816 00:34:40.383655   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:40.383675   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:40.383685   78489 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 00:34:40.383712   78489 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.15 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-819398 NodeName:no-preload-819398 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 00:34:40.383855   78489 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-819398"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 00:34:40.383930   78489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 00:34:40.395384   78489 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 00:34:40.395457   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 00:34:40.405037   78489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0816 00:34:40.423278   78489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 00:34:40.440963   78489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 00:34:40.458845   78489 ssh_runner.go:195] Run: grep 192.168.61.15	control-plane.minikube.internal$ /etc/hosts
	I0816 00:34:40.462574   78489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 00:34:40.475524   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:34:40.614624   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:34:40.632229   78489 certs.go:68] Setting up /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398 for IP: 192.168.61.15
	I0816 00:34:40.632252   78489 certs.go:194] generating shared ca certs ...
	I0816 00:34:40.632267   78489 certs.go:226] acquiring lock for ca certs: {Name:mkc7c702c85330ff91217d90d2270778ddb79f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:34:40.632430   78489 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key
	I0816 00:34:40.632483   78489 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key
	I0816 00:34:40.632497   78489 certs.go:256] generating profile certs ...
	I0816 00:34:40.632598   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/client.key
	I0816 00:34:40.632679   78489 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key.a9de72ef
	I0816 00:34:40.632759   78489 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key
	I0816 00:34:40.632919   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem (1338 bytes)
	W0816 00:34:40.632962   78489 certs.go:480] ignoring /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078_empty.pem, impossibly tiny 0 bytes
	I0816 00:34:40.632978   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 00:34:40.633011   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/ca.pem (1082 bytes)
	I0816 00:34:40.633042   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/cert.pem (1123 bytes)
	I0816 00:34:40.633068   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/certs/key.pem (1675 bytes)
	I0816 00:34:40.633124   78489 certs.go:484] found cert: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem (1708 bytes)
	I0816 00:34:40.633963   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 00:34:40.676094   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 00:34:40.707032   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 00:34:40.740455   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 00:34:40.778080   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 00:34:40.809950   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 00:34:40.841459   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 00:34:40.866708   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/no-preload-819398/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 00:34:40.893568   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/ssl/certs/200782.pem --> /usr/share/ca-certificates/200782.pem (1708 bytes)
	I0816 00:34:40.917144   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 00:34:40.942349   78489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19452-12919/.minikube/certs/20078.pem --> /usr/share/ca-certificates/20078.pem (1338 bytes)
	I0816 00:34:40.966731   78489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 00:34:40.984268   78489 ssh_runner.go:195] Run: openssl version
	I0816 00:34:40.990614   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/200782.pem && ln -fs /usr/share/ca-certificates/200782.pem /etc/ssl/certs/200782.pem"
	I0816 00:34:41.002909   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007595   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 23:16 /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.007645   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/200782.pem
	I0816 00:34:41.013618   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/200782.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 00:34:41.024886   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 00:34:41.036350   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040801   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 23:06 /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.040845   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 00:34:41.046554   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 00:34:41.057707   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20078.pem && ln -fs /usr/share/ca-certificates/20078.pem /etc/ssl/certs/20078.pem"
	I0816 00:34:41.069566   78489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074107   78489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 23:16 /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.074159   78489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20078.pem
	I0816 00:34:41.080113   78489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20078.pem /etc/ssl/certs/51391683.0"
	I0816 00:34:41.091854   78489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 00:34:41.096543   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 00:34:41.102883   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 00:34:41.109228   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 00:34:41.115622   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 00:34:41.121895   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 00:34:41.128016   78489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 00:34:41.134126   78489 kubeadm.go:392] StartCluster: {Name:no-preload-819398 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-819398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 00:34:41.134230   78489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0816 00:34:41.134310   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.178898   78489 cri.go:89] found id: ""
	I0816 00:34:41.178972   78489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 00:34:41.190167   78489 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 00:34:41.190184   78489 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 00:34:41.190223   78489 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 00:34:41.200385   78489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 00:34:41.201824   78489 kubeconfig.go:125] found "no-preload-819398" server: "https://192.168.61.15:8443"
	I0816 00:34:41.204812   78489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 00:34:41.225215   78489 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.15
	I0816 00:34:41.225252   78489 kubeadm.go:1160] stopping kube-system containers ...
	I0816 00:34:41.225265   78489 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0816 00:34:41.225323   78489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 00:34:41.269288   78489 cri.go:89] found id: ""
	I0816 00:34:41.269377   78489 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 00:34:41.286238   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:34:41.297713   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:34:41.297732   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:34:41.297782   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:34:41.308635   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:34:41.308695   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:34:41.320045   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:34:41.329866   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:34:41.329952   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:34:41.341488   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.351018   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:34:41.351083   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:34:41.360845   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:34:41.370730   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:34:41.370808   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:34:41.382572   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:34:41.392544   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.515558   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:41.425671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:43.426507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:40.377638   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:42.877395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:41.821459   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.821195   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.321938   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.822038   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.321447   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.821571   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.321428   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:45.821496   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:46.322149   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:42.610068   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.094473643s)
	I0816 00:34:42.610106   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.850562   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:42.916519   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:43.042025   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:34:43.042117   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:43.543065   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.043098   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:44.061154   78489 api_server.go:72] duration metric: took 1.019134992s to wait for apiserver process to appear ...
	I0816 00:34:44.061180   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:34:44.061199   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.718683   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.718717   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:46.718730   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:46.785528   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 00:34:46.785559   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 00:34:47.061692   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.066556   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.066590   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:47.562057   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:47.569664   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 00:34:47.569699   78489 api_server.go:103] status: https://192.168.61.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 00:34:48.061258   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:34:48.065926   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:34:48.073136   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:34:48.073165   78489 api_server.go:131] duration metric: took 4.011977616s to wait for apiserver health ...
	I0816 00:34:48.073179   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:34:48.073189   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:34:48.075105   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:34:45.925817   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.925984   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:45.376424   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:47.377794   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.876764   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:46.822140   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.321575   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:47.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.321365   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.822009   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.321536   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:49.821189   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.321387   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:50.821982   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:51.322075   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:48.076340   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:34:48.113148   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0816 00:34:48.152316   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:34:48.166108   78489 system_pods.go:59] 8 kube-system pods found
	I0816 00:34:48.166142   78489 system_pods.go:61] "coredns-6f6b679f8f-sv454" [5ba1d55f-4455-4ad1-b3c8-7671ce481dd2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 00:34:48.166154   78489 system_pods.go:61] "etcd-no-preload-819398" [b5e55df3-fb20-4980-928f-31217bf25351] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 00:34:48.166164   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [7670f41c-8439-4782-a3c8-077a144d2998] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 00:34:48.166175   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [61a6080a-5e65-4400-b230-0703f347fc17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 00:34:48.166182   78489 system_pods.go:61] "kube-proxy-xdm7w" [9d0517c5-8cf7-47a0-86d0-c674677e9f46] Running
	I0816 00:34:48.166191   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [af346e37-312a-4225-b3bf-0ddda71022dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 00:34:48.166204   78489 system_pods.go:61] "metrics-server-6867b74b74-mm5l7" [2ebc3f9f-e1a7-47b6-849e-6a4995d13206] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:34:48.166214   78489 system_pods.go:61] "storage-provisioner" [745bbfbd-aedb-4e68-946e-5a7ead1d5b48] Running
	I0816 00:34:48.166223   78489 system_pods.go:74] duration metric: took 13.883212ms to wait for pod list to return data ...
	I0816 00:34:48.166235   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:34:48.170444   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:34:48.170478   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:34:48.170492   78489 node_conditions.go:105] duration metric: took 4.251703ms to run NodePressure ...
	I0816 00:34:48.170520   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 00:34:48.437519   78489 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 00:34:48.441992   78489 kubeadm.go:739] kubelet initialised
	I0816 00:34:48.442015   78489 kubeadm.go:740] duration metric: took 4.465986ms waiting for restarted kubelet to initialise ...
	I0816 00:34:48.442025   78489 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:34:48.447127   78489 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:50.453956   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:49.926184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.926515   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.876909   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.376236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:51.822066   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.321534   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.821154   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.321256   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:53.821510   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.321984   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:54.821175   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.321601   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:55.821215   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:56.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:52.454122   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.954716   78489 pod_ready.go:103] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:54.426224   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.926472   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.376394   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:58.876502   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:34:56.821891   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.321266   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.821346   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.321718   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:58.821304   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.321503   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:59.821302   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.321172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:00.821563   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:01.321323   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:34:57.453951   78489 pod_ready.go:93] pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace has status "Ready":"True"
	I0816 00:34:57.453974   78489 pod_ready.go:82] duration metric: took 9.00682228s for pod "coredns-6f6b679f8f-sv454" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:57.453983   78489 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.460582   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.961243   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:00.961269   78489 pod_ready.go:82] duration metric: took 3.507278873s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:00.961279   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468020   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:01.468047   78489 pod_ready.go:82] duration metric: took 506.758881ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:01.468060   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:34:59.425956   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.925967   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:00.876678   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:03.376662   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:01.821317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.321560   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.821707   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.322110   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:03.821327   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.321430   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:04.821935   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.321559   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:05.821373   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.321230   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:02.975498   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.975522   78489 pod_ready.go:82] duration metric: took 1.50745395s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.975531   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980290   78489 pod_ready.go:93] pod "kube-proxy-xdm7w" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.980316   78489 pod_ready.go:82] duration metric: took 4.778704ms for pod "kube-proxy-xdm7w" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.980328   78489 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988237   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:35:02.988260   78489 pod_ready.go:82] duration metric: took 7.924207ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:02.988268   78489 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	I0816 00:35:04.993992   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:04.426419   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.426648   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.927578   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:05.877102   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:07.877187   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:06.821405   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.321781   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:07.821420   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.321483   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:08.821347   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.321167   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:09.821188   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.321474   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:10.821179   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:11.322114   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:06.994539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:08.995530   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.494248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.425605   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:13.426338   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:10.378729   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:12.875673   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:14.876717   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:11.822105   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.321963   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:12.822172   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.321805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:13.821971   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:14.321784   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:14.321882   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:14.360939   79191 cri.go:89] found id: ""
	I0816 00:35:14.360962   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.360971   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:14.360976   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:14.361028   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:14.397796   79191 cri.go:89] found id: ""
	I0816 00:35:14.397824   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.397836   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:14.397858   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:14.397922   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:14.433924   79191 cri.go:89] found id: ""
	I0816 00:35:14.433950   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.433960   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:14.433968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:14.434024   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:14.468657   79191 cri.go:89] found id: ""
	I0816 00:35:14.468685   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.468696   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:14.468704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:14.468770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:14.505221   79191 cri.go:89] found id: ""
	I0816 00:35:14.505247   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.505256   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:14.505264   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:14.505323   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:14.546032   79191 cri.go:89] found id: ""
	I0816 00:35:14.546062   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.546072   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:14.546079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:14.546147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:14.581260   79191 cri.go:89] found id: ""
	I0816 00:35:14.581284   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.581292   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:14.581298   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:14.581352   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:14.616103   79191 cri.go:89] found id: ""
	I0816 00:35:14.616127   79191 logs.go:276] 0 containers: []
	W0816 00:35:14.616134   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:14.616142   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:14.616153   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:14.690062   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:14.690106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:14.735662   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:14.735699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:14.786049   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:14.786086   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:14.800375   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:14.800405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:14.931822   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:13.494676   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.497759   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:15.925671   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.926279   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.375842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.376005   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:17.432686   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:17.448728   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:17.448806   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:17.496384   79191 cri.go:89] found id: ""
	I0816 00:35:17.496523   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.496568   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:17.496581   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:17.496646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:17.560779   79191 cri.go:89] found id: ""
	I0816 00:35:17.560810   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.560820   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:17.560829   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:17.560891   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:17.606007   79191 cri.go:89] found id: ""
	I0816 00:35:17.606036   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.606047   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:17.606054   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:17.606123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:17.639910   79191 cri.go:89] found id: ""
	I0816 00:35:17.639937   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.639945   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:17.639951   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:17.640030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:17.676534   79191 cri.go:89] found id: ""
	I0816 00:35:17.676563   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.676573   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:17.676581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:17.676645   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:17.716233   79191 cri.go:89] found id: ""
	I0816 00:35:17.716255   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.716262   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:17.716268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:17.716334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:17.753648   79191 cri.go:89] found id: ""
	I0816 00:35:17.753686   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.753696   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:17.753704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:17.753763   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:17.791670   79191 cri.go:89] found id: ""
	I0816 00:35:17.791694   79191 logs.go:276] 0 containers: []
	W0816 00:35:17.791702   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:17.791711   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:17.791722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:17.840616   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:17.840650   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:17.854949   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:17.854981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:17.933699   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:17.933724   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:17.933750   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:18.010177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:18.010211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:20.551384   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:20.564463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:20.564540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:20.604361   79191 cri.go:89] found id: ""
	I0816 00:35:20.604389   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.604399   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:20.604405   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:20.604453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:20.639502   79191 cri.go:89] found id: ""
	I0816 00:35:20.639528   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.639535   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:20.639541   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:20.639590   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:20.676430   79191 cri.go:89] found id: ""
	I0816 00:35:20.676476   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.676484   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:20.676496   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:20.676551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:20.711213   79191 cri.go:89] found id: ""
	I0816 00:35:20.711243   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.711253   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:20.711261   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:20.711320   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:20.745533   79191 cri.go:89] found id: ""
	I0816 00:35:20.745563   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.745574   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:20.745581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:20.745644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:20.781031   79191 cri.go:89] found id: ""
	I0816 00:35:20.781056   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.781064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:20.781071   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:20.781119   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:20.819966   79191 cri.go:89] found id: ""
	I0816 00:35:20.819994   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.820005   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:20.820012   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:20.820096   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:20.859011   79191 cri.go:89] found id: ""
	I0816 00:35:20.859041   79191 logs.go:276] 0 containers: []
	W0816 00:35:20.859052   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:20.859063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:20.859078   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:20.909479   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:20.909513   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:20.925627   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:20.925653   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:21.001707   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:21.001733   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:21.001747   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:21.085853   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:21.085893   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:17.994492   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:20.496255   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:19.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:22.426663   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:21.878587   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.377462   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:23.626499   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:23.640337   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:23.640395   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:23.679422   79191 cri.go:89] found id: ""
	I0816 00:35:23.679449   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.679457   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:23.679463   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:23.679522   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:23.716571   79191 cri.go:89] found id: ""
	I0816 00:35:23.716594   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.716601   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:23.716607   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:23.716660   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:23.752539   79191 cri.go:89] found id: ""
	I0816 00:35:23.752563   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.752573   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:23.752581   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:23.752640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:23.790665   79191 cri.go:89] found id: ""
	I0816 00:35:23.790693   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.790700   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:23.790707   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:23.790757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:23.827695   79191 cri.go:89] found id: ""
	I0816 00:35:23.827719   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.827727   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:23.827733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:23.827792   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:23.867664   79191 cri.go:89] found id: ""
	I0816 00:35:23.867687   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.867695   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:23.867701   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:23.867776   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:23.907844   79191 cri.go:89] found id: ""
	I0816 00:35:23.907871   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.907882   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:23.907890   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:23.907951   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:23.945372   79191 cri.go:89] found id: ""
	I0816 00:35:23.945403   79191 logs.go:276] 0 containers: []
	W0816 00:35:23.945414   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:23.945424   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:23.945438   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:23.998270   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:23.998302   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:24.012794   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:24.012824   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:24.087285   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:24.087308   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:24.087340   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:24.167151   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:24.167184   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:26.710285   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:26.724394   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:26.724453   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:26.764667   79191 cri.go:89] found id: ""
	I0816 00:35:26.764690   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.764698   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:26.764704   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:26.764756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:22.994036   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.995035   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:24.927042   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:27.426054   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.877007   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.376563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:26.806631   79191 cri.go:89] found id: ""
	I0816 00:35:26.806660   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.806670   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:26.806677   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:26.806741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:26.843434   79191 cri.go:89] found id: ""
	I0816 00:35:26.843473   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.843485   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:26.843493   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:26.843576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:26.882521   79191 cri.go:89] found id: ""
	I0816 00:35:26.882556   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.882566   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:26.882574   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:26.882635   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:26.917956   79191 cri.go:89] found id: ""
	I0816 00:35:26.917985   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.917995   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:26.918004   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:26.918056   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:26.953168   79191 cri.go:89] found id: ""
	I0816 00:35:26.953191   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.953199   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:26.953205   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:26.953251   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:26.991366   79191 cri.go:89] found id: ""
	I0816 00:35:26.991397   79191 logs.go:276] 0 containers: []
	W0816 00:35:26.991408   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:26.991416   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:26.991479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:27.028591   79191 cri.go:89] found id: ""
	I0816 00:35:27.028619   79191 logs.go:276] 0 containers: []
	W0816 00:35:27.028626   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:27.028635   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:27.028647   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:27.111613   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:27.111645   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:27.153539   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:27.153575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:27.209377   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:27.209420   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:27.223316   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:27.223343   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:27.301411   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:29.801803   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:29.815545   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:29.815626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:29.853638   79191 cri.go:89] found id: ""
	I0816 00:35:29.853668   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.853678   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:29.853687   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:29.853756   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:29.892532   79191 cri.go:89] found id: ""
	I0816 00:35:29.892554   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.892561   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:29.892567   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:29.892622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:29.932486   79191 cri.go:89] found id: ""
	I0816 00:35:29.932511   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.932519   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:29.932524   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:29.932580   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:29.973161   79191 cri.go:89] found id: ""
	I0816 00:35:29.973194   79191 logs.go:276] 0 containers: []
	W0816 00:35:29.973205   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:29.973213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:29.973275   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:30.009606   79191 cri.go:89] found id: ""
	I0816 00:35:30.009629   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.009637   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:30.009643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:30.009691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:30.045016   79191 cri.go:89] found id: ""
	I0816 00:35:30.045043   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.045050   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:30.045057   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:30.045113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:30.079934   79191 cri.go:89] found id: ""
	I0816 00:35:30.079959   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.079968   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:30.079974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:30.080030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:30.114173   79191 cri.go:89] found id: ""
	I0816 00:35:30.114199   79191 logs.go:276] 0 containers: []
	W0816 00:35:30.114207   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:30.114216   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:30.114227   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:30.154765   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:30.154791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:30.204410   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:30.204442   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:30.218909   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:30.218934   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:30.294141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:30.294161   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:30.294193   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:26.995394   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.494569   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:29.426234   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.926349   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.926433   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:31.376976   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.377869   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:32.872216   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:32.886211   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:32.886289   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:32.929416   79191 cri.go:89] found id: ""
	I0816 00:35:32.929440   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.929449   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:32.929456   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:32.929520   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:32.977862   79191 cri.go:89] found id: ""
	I0816 00:35:32.977887   79191 logs.go:276] 0 containers: []
	W0816 00:35:32.977896   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:32.977920   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:32.977978   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:33.015569   79191 cri.go:89] found id: ""
	I0816 00:35:33.015593   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.015603   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:33.015622   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:33.015681   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:33.050900   79191 cri.go:89] found id: ""
	I0816 00:35:33.050934   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.050943   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:33.050959   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:33.051033   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:33.084529   79191 cri.go:89] found id: ""
	I0816 00:35:33.084556   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.084564   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:33.084569   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:33.084619   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:33.119819   79191 cri.go:89] found id: ""
	I0816 00:35:33.119845   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.119855   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:33.119863   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:33.119928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:33.159922   79191 cri.go:89] found id: ""
	I0816 00:35:33.159952   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.159959   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:33.159965   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:33.160023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:33.194977   79191 cri.go:89] found id: ""
	I0816 00:35:33.195006   79191 logs.go:276] 0 containers: []
	W0816 00:35:33.195018   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:33.195030   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:33.195044   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:33.208578   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:33.208623   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:33.282177   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:33.282198   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:33.282211   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:33.365514   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:33.365552   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:33.405190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:33.405226   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:35.959033   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:35.971866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:35.971934   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:36.008442   79191 cri.go:89] found id: ""
	I0816 00:35:36.008473   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.008483   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:36.008489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:36.008547   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:36.044346   79191 cri.go:89] found id: ""
	I0816 00:35:36.044374   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.044386   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:36.044393   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:36.044444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:36.083078   79191 cri.go:89] found id: ""
	I0816 00:35:36.083104   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.083112   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:36.083118   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:36.083166   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:36.120195   79191 cri.go:89] found id: ""
	I0816 00:35:36.120218   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.120226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:36.120232   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:36.120288   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:36.156186   79191 cri.go:89] found id: ""
	I0816 00:35:36.156215   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.156225   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:36.156233   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:36.156295   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:36.195585   79191 cri.go:89] found id: ""
	I0816 00:35:36.195613   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.195623   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:36.195631   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:36.195699   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:36.231110   79191 cri.go:89] found id: ""
	I0816 00:35:36.231133   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.231141   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:36.231147   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:36.231210   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:36.268745   79191 cri.go:89] found id: ""
	I0816 00:35:36.268770   79191 logs.go:276] 0 containers: []
	W0816 00:35:36.268778   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:36.268786   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:36.268800   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:36.282225   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:36.282251   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:36.351401   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:36.351431   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:36.351447   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:36.429970   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:36.430003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:36.473745   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:36.473776   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:31.994163   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:33.994256   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.995188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:36.427247   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.926123   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:35.877303   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:38.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:39.027444   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:39.041107   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:39.041170   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:39.079807   79191 cri.go:89] found id: ""
	I0816 00:35:39.079830   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.079837   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:39.079843   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:39.079890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:39.115532   79191 cri.go:89] found id: ""
	I0816 00:35:39.115559   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.115569   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:39.115576   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:39.115623   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:39.150197   79191 cri.go:89] found id: ""
	I0816 00:35:39.150222   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.150233   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:39.150241   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:39.150300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:39.186480   79191 cri.go:89] found id: ""
	I0816 00:35:39.186507   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.186515   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:39.186521   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:39.186572   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:39.221576   79191 cri.go:89] found id: ""
	I0816 00:35:39.221605   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.221615   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:39.221620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:39.221669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:39.259846   79191 cri.go:89] found id: ""
	I0816 00:35:39.259877   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.259888   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:39.259896   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:39.259950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:39.294866   79191 cri.go:89] found id: ""
	I0816 00:35:39.294891   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.294898   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:39.294903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:39.294952   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:39.329546   79191 cri.go:89] found id: ""
	I0816 00:35:39.329576   79191 logs.go:276] 0 containers: []
	W0816 00:35:39.329584   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:39.329593   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:39.329604   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:39.371579   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:39.371609   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:39.422903   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:39.422935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:39.437673   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:39.437699   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:39.515146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:39.515171   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:39.515185   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:38.495377   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.495856   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.926444   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:43.426438   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:40.376648   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.877521   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:42.101733   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:42.115563   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:42.115640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:42.155187   79191 cri.go:89] found id: ""
	I0816 00:35:42.155216   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.155224   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:42.155230   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:42.155282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:42.194414   79191 cri.go:89] found id: ""
	I0816 00:35:42.194444   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.194456   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:42.194464   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:42.194523   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:42.234219   79191 cri.go:89] found id: ""
	I0816 00:35:42.234245   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.234253   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:42.234259   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:42.234314   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:42.272278   79191 cri.go:89] found id: ""
	I0816 00:35:42.272304   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.272314   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:42.272322   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:42.272381   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:42.309973   79191 cri.go:89] found id: ""
	I0816 00:35:42.309999   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.310007   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:42.310013   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:42.310066   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:42.350745   79191 cri.go:89] found id: ""
	I0816 00:35:42.350773   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.350782   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:42.350790   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:42.350853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:42.387775   79191 cri.go:89] found id: ""
	I0816 00:35:42.387803   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.387813   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:42.387832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:42.387902   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:42.425086   79191 cri.go:89] found id: ""
	I0816 00:35:42.425110   79191 logs.go:276] 0 containers: []
	W0816 00:35:42.425118   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:42.425125   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:42.425138   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:42.515543   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:42.515575   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:42.558348   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:42.558372   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:42.613026   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:42.613059   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:42.628907   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:42.628932   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:42.710265   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.211083   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:45.225001   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:45.225083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:45.258193   79191 cri.go:89] found id: ""
	I0816 00:35:45.258223   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.258232   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:45.258240   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:45.258297   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:45.294255   79191 cri.go:89] found id: ""
	I0816 00:35:45.294278   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.294286   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:45.294291   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:45.294335   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:45.329827   79191 cri.go:89] found id: ""
	I0816 00:35:45.329875   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.329886   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:45.329894   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:45.329944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:45.366095   79191 cri.go:89] found id: ""
	I0816 00:35:45.366124   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.366134   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:45.366141   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:45.366202   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:45.402367   79191 cri.go:89] found id: ""
	I0816 00:35:45.402390   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.402398   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:45.402403   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:45.402449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:45.439272   79191 cri.go:89] found id: ""
	I0816 00:35:45.439293   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.439300   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:45.439310   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:45.439358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:45.474351   79191 cri.go:89] found id: ""
	I0816 00:35:45.474380   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.474388   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:45.474393   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:45.474445   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:45.519636   79191 cri.go:89] found id: ""
	I0816 00:35:45.519661   79191 logs.go:276] 0 containers: []
	W0816 00:35:45.519671   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:45.519680   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:45.519695   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:45.593425   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:45.593446   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:45.593458   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:45.668058   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:45.668095   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:45.716090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:45.716125   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:45.774177   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:45.774207   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
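Each cycle above is minikube enumerating every expected control-plane container by name and finding none ("0 containers"), which is why it falls back to node-level logs. The loop below is a hedged shell approximation of that enumeration, built only from the crictl invocation already shown in the log; the list of names mirrors the components queried above.

    # Hedged approximation of the per-component check seen in the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done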
	I0816 00:35:42.495914   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:44.996641   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.426740   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.925719   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:45.376025   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:47.376628   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.876035   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
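The interleaved pod_ready.go lines come from three other test profiles (processes 78489, 78713 and 78747) polling their metrics-server pods, which keep reporting Ready as False. A rough kubectl equivalent of that readiness check is sketched below; the pod name is taken from the log, the --context value is a placeholder, and the jsonpath form is only an approximation of what the Go poller inspects.

    # Hedged approximation of the Ready-condition poll done by pod_ready.go.
    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-sxqkg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Prints "True" once the pod is Ready; these logs show it staying "False".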
	I0816 00:35:48.288893   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:48.302256   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:48.302321   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:48.337001   79191 cri.go:89] found id: ""
	I0816 00:35:48.337030   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.337041   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:48.337048   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:48.337110   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:48.378341   79191 cri.go:89] found id: ""
	I0816 00:35:48.378367   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.378375   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:48.378384   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:48.378447   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:48.414304   79191 cri.go:89] found id: ""
	I0816 00:35:48.414383   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.414402   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:48.414410   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:48.414473   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:48.453946   79191 cri.go:89] found id: ""
	I0816 00:35:48.453969   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.453976   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:48.453982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:48.454036   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:48.489597   79191 cri.go:89] found id: ""
	I0816 00:35:48.489617   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.489623   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:48.489629   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:48.489672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:48.524195   79191 cri.go:89] found id: ""
	I0816 00:35:48.524222   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.524232   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:48.524239   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:48.524293   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:48.567854   79191 cri.go:89] found id: ""
	I0816 00:35:48.567880   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.567890   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:48.567897   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:48.567956   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:48.603494   79191 cri.go:89] found id: ""
	I0816 00:35:48.603520   79191 logs.go:276] 0 containers: []
	W0816 00:35:48.603530   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:48.603540   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:48.603556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:48.642927   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:48.642960   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:48.693761   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:48.693791   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:48.708790   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:48.708818   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:48.780072   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:48.780092   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:48.780106   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.362108   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:51.376113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:51.376185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:51.413988   79191 cri.go:89] found id: ""
	I0816 00:35:51.414022   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.414033   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:51.414041   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:51.414101   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:51.460901   79191 cri.go:89] found id: ""
	I0816 00:35:51.460937   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.460948   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:51.460956   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:51.461019   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:51.497178   79191 cri.go:89] found id: ""
	I0816 00:35:51.497205   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.497215   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:51.497223   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:51.497365   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:51.534559   79191 cri.go:89] found id: ""
	I0816 00:35:51.534589   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.534600   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:51.534607   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:51.534668   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:51.570258   79191 cri.go:89] found id: ""
	I0816 00:35:51.570280   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.570287   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:51.570293   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:51.570356   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:51.609639   79191 cri.go:89] found id: ""
	I0816 00:35:51.609665   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.609675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:51.609683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:51.609742   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:51.645629   79191 cri.go:89] found id: ""
	I0816 00:35:51.645652   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.645659   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:51.645664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:51.645731   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:51.683325   79191 cri.go:89] found id: ""
	I0816 00:35:51.683344   79191 logs.go:276] 0 containers: []
	W0816 00:35:51.683351   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:51.683358   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:51.683369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:51.739101   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:51.739133   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:51.753436   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:51.753466   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:35:47.494904   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.495416   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:49.926975   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:51.928318   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:52.376854   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.880623   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:35:51.831242   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:51.831268   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:51.831294   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:51.926924   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:51.926970   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:54.472667   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:54.486706   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:54.486785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:54.524180   79191 cri.go:89] found id: ""
	I0816 00:35:54.524203   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.524211   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:54.524216   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:54.524273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:54.563758   79191 cri.go:89] found id: ""
	I0816 00:35:54.563781   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.563788   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:54.563795   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:54.563859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:54.599442   79191 cri.go:89] found id: ""
	I0816 00:35:54.599471   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.599481   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:54.599488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:54.599553   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:54.633521   79191 cri.go:89] found id: ""
	I0816 00:35:54.633547   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.633558   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:54.633565   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:54.633628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:54.670036   79191 cri.go:89] found id: ""
	I0816 00:35:54.670064   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.670075   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:54.670083   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:54.670148   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:54.707565   79191 cri.go:89] found id: ""
	I0816 00:35:54.707587   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.707594   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:54.707600   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:54.707659   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:54.744500   79191 cri.go:89] found id: ""
	I0816 00:35:54.744530   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.744541   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:54.744548   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:54.744612   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:54.778964   79191 cri.go:89] found id: ""
	I0816 00:35:54.778988   79191 logs.go:276] 0 containers: []
	W0816 00:35:54.778995   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:54.779007   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:54.779020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:35:54.831806   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:54.831838   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:54.845954   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:54.845979   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:54.921817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:54.921855   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:54.921871   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:55.006401   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:55.006439   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:51.996591   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.495673   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:54.427044   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:56.927184   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.375410   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.376333   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:57.548661   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:35:57.562489   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:35:57.562549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:35:57.597855   79191 cri.go:89] found id: ""
	I0816 00:35:57.597881   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.597891   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:35:57.597899   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:35:57.597961   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:35:57.634085   79191 cri.go:89] found id: ""
	I0816 00:35:57.634114   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.634126   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:35:57.634133   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:35:57.634193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:35:57.671748   79191 cri.go:89] found id: ""
	I0816 00:35:57.671779   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.671788   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:35:57.671795   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:35:57.671859   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:35:57.708836   79191 cri.go:89] found id: ""
	I0816 00:35:57.708862   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.708870   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:35:57.708877   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:35:57.708940   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:35:57.744601   79191 cri.go:89] found id: ""
	I0816 00:35:57.744630   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.744639   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:35:57.744645   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:35:57.744706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:35:57.781888   79191 cri.go:89] found id: ""
	I0816 00:35:57.781919   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.781929   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:35:57.781937   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:35:57.781997   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:35:57.822612   79191 cri.go:89] found id: ""
	I0816 00:35:57.822634   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.822641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:35:57.822647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:35:57.822706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:35:57.873968   79191 cri.go:89] found id: ""
	I0816 00:35:57.873998   79191 logs.go:276] 0 containers: []
	W0816 00:35:57.874008   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:35:57.874019   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:35:57.874037   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:35:57.896611   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:35:57.896643   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:35:57.995575   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:57.995597   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:35:57.995612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:35:58.077196   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:35:58.077230   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:35:58.116956   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:35:58.116985   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
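When none of the control-plane containers exist, minikube falls back to collecting node-level diagnostics (kubelet and CRI-O journals, dmesg, and a container listing), repeating the whole cycle roughly every three seconds in this log. The commands below are copied from the Run: lines above and can be replayed by hand on the node; treat them as a sketch of that fallback, not as the tool's full log-gathering path.

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a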
	I0816 00:36:00.664805   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:00.678425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:00.678501   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:00.715522   79191 cri.go:89] found id: ""
	I0816 00:36:00.715548   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.715557   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:00.715562   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:00.715608   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:00.749892   79191 cri.go:89] found id: ""
	I0816 00:36:00.749920   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.749931   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:00.749938   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:00.750006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:00.787302   79191 cri.go:89] found id: ""
	I0816 00:36:00.787325   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.787332   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:00.787338   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:00.787392   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:00.821866   79191 cri.go:89] found id: ""
	I0816 00:36:00.821894   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.821906   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:00.821914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:00.821971   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:00.856346   79191 cri.go:89] found id: ""
	I0816 00:36:00.856369   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.856377   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:00.856382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:00.856431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:00.893569   79191 cri.go:89] found id: ""
	I0816 00:36:00.893596   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.893606   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:00.893614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:00.893677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:00.930342   79191 cri.go:89] found id: ""
	I0816 00:36:00.930367   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.930378   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:00.930386   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:00.930622   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:00.966039   79191 cri.go:89] found id: ""
	I0816 00:36:00.966071   79191 logs.go:276] 0 containers: []
	W0816 00:36:00.966085   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:00.966095   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:00.966109   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:01.045594   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:01.045631   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:01.089555   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:01.089586   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:01.141597   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:01.141633   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:01.156260   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:01.156286   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:01.230573   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:35:56.995077   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:58.995897   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.495116   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:35:59.426099   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.926011   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.927327   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:01.376842   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.875993   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:03.730825   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:03.744766   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:03.744838   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:03.781095   79191 cri.go:89] found id: ""
	I0816 00:36:03.781124   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.781142   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:03.781150   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:03.781215   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:03.815637   79191 cri.go:89] found id: ""
	I0816 00:36:03.815669   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.815680   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:03.815687   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:03.815741   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:03.850076   79191 cri.go:89] found id: ""
	I0816 00:36:03.850110   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.850122   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:03.850130   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:03.850185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:03.888840   79191 cri.go:89] found id: ""
	I0816 00:36:03.888863   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.888872   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:03.888879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:03.888941   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:03.928317   79191 cri.go:89] found id: ""
	I0816 00:36:03.928341   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.928350   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:03.928359   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:03.928413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:03.964709   79191 cri.go:89] found id: ""
	I0816 00:36:03.964741   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.964751   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:03.964760   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:03.964830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:03.999877   79191 cri.go:89] found id: ""
	I0816 00:36:03.999902   79191 logs.go:276] 0 containers: []
	W0816 00:36:03.999912   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:03.999919   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:03.999981   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:04.036772   79191 cri.go:89] found id: ""
	I0816 00:36:04.036799   79191 logs.go:276] 0 containers: []
	W0816 00:36:04.036810   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:04.036820   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:04.036833   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:04.118843   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:04.118879   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:04.162491   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:04.162548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:04.215100   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:04.215134   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:04.229043   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:04.229069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:04.307480   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:03.495661   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.995711   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.426223   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:08.426470   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:05.876718   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:07.877431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:06.807640   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:06.821144   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:06.821203   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:06.857743   79191 cri.go:89] found id: ""
	I0816 00:36:06.857776   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.857786   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:06.857794   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:06.857872   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:06.895980   79191 cri.go:89] found id: ""
	I0816 00:36:06.896007   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.896018   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:06.896025   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:06.896090   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:06.935358   79191 cri.go:89] found id: ""
	I0816 00:36:06.935389   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.935399   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:06.935406   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:06.935461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:06.971533   79191 cri.go:89] found id: ""
	I0816 00:36:06.971561   79191 logs.go:276] 0 containers: []
	W0816 00:36:06.971572   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:06.971580   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:06.971640   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:07.007786   79191 cri.go:89] found id: ""
	I0816 00:36:07.007812   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.007823   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:07.007830   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:07.007890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:07.044060   79191 cri.go:89] found id: ""
	I0816 00:36:07.044092   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.044104   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:07.044112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:07.044185   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:07.080058   79191 cri.go:89] found id: ""
	I0816 00:36:07.080085   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.080094   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:07.080101   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:07.080156   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:07.117749   79191 cri.go:89] found id: ""
	I0816 00:36:07.117773   79191 logs.go:276] 0 containers: []
	W0816 00:36:07.117780   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:07.117787   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:07.117799   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:07.171418   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:07.171453   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:07.185520   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:07.185542   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:07.257817   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:07.257872   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:07.257888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:07.339530   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:07.339576   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:09.882613   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:09.895873   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:09.895950   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:09.936739   79191 cri.go:89] found id: ""
	I0816 00:36:09.936766   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.936774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:09.936780   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:09.936836   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:09.974145   79191 cri.go:89] found id: ""
	I0816 00:36:09.974168   79191 logs.go:276] 0 containers: []
	W0816 00:36:09.974180   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:09.974186   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:09.974243   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:10.012166   79191 cri.go:89] found id: ""
	I0816 00:36:10.012196   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.012206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:10.012214   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:10.012265   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:10.051080   79191 cri.go:89] found id: ""
	I0816 00:36:10.051103   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.051111   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:10.051117   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:10.051176   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:10.088519   79191 cri.go:89] found id: ""
	I0816 00:36:10.088548   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.088559   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:10.088567   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:10.088628   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:10.123718   79191 cri.go:89] found id: ""
	I0816 00:36:10.123744   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.123752   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:10.123758   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:10.123805   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:10.161900   79191 cri.go:89] found id: ""
	I0816 00:36:10.161922   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.161929   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:10.161995   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:10.162064   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:10.196380   79191 cri.go:89] found id: ""
	I0816 00:36:10.196408   79191 logs.go:276] 0 containers: []
	W0816 00:36:10.196419   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:10.196429   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:10.196443   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:10.248276   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:10.248309   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:10.262241   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:10.262269   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:10.340562   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:10.340598   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:10.340626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:10.417547   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:10.417578   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:07.996930   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:09.997666   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.426502   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.426976   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:10.377172   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.877236   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:12.962310   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:12.976278   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:12.976338   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:13.014501   79191 cri.go:89] found id: ""
	I0816 00:36:13.014523   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.014530   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:13.014536   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:13.014587   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:13.055942   79191 cri.go:89] found id: ""
	I0816 00:36:13.055970   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.055979   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:13.055987   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:13.056048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:13.090309   79191 cri.go:89] found id: ""
	I0816 00:36:13.090336   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.090346   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:13.090354   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:13.090413   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:13.124839   79191 cri.go:89] found id: ""
	I0816 00:36:13.124865   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.124876   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:13.124884   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:13.124945   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:13.164535   79191 cri.go:89] found id: ""
	I0816 00:36:13.164560   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.164567   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:13.164573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:13.164630   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:13.198651   79191 cri.go:89] found id: ""
	I0816 00:36:13.198699   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.198710   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:13.198718   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:13.198785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:13.233255   79191 cri.go:89] found id: ""
	I0816 00:36:13.233278   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.233286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:13.233292   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:13.233348   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:13.267327   79191 cri.go:89] found id: ""
	I0816 00:36:13.267351   79191 logs.go:276] 0 containers: []
	W0816 00:36:13.267359   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:13.267367   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:13.267384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:13.352053   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:13.352089   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:13.393438   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:13.393471   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:13.445397   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:13.445430   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:13.459143   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:13.459177   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:13.530160   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.031296   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:16.045557   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:16.045618   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:16.081828   79191 cri.go:89] found id: ""
	I0816 00:36:16.081871   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.081882   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:16.081890   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:16.081949   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:16.116228   79191 cri.go:89] found id: ""
	I0816 00:36:16.116254   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.116264   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:16.116272   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:16.116334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:16.150051   79191 cri.go:89] found id: ""
	I0816 00:36:16.150079   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.150087   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:16.150093   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:16.150139   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:16.186218   79191 cri.go:89] found id: ""
	I0816 00:36:16.186241   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.186248   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:16.186254   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:16.186301   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:16.223223   79191 cri.go:89] found id: ""
	I0816 00:36:16.223255   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.223263   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:16.223270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:16.223316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:16.259929   79191 cri.go:89] found id: ""
	I0816 00:36:16.259953   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.259960   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:16.259970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:16.260099   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:16.294611   79191 cri.go:89] found id: ""
	I0816 00:36:16.294633   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.294641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:16.294649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:16.294725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:16.333492   79191 cri.go:89] found id: ""
	I0816 00:36:16.333523   79191 logs.go:276] 0 containers: []
	W0816 00:36:16.333533   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:16.333544   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:16.333563   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:16.385970   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:16.386002   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:16.400359   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:16.400384   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:16.471363   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:16.471388   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:16.471408   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:16.555990   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:16.556022   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:12.495406   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.995145   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:14.926160   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.426768   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:15.376672   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:17.876395   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.876542   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.099502   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:19.112649   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:19.112706   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:19.145809   79191 cri.go:89] found id: ""
	I0816 00:36:19.145837   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.145858   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:19.145865   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:19.145928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:19.183737   79191 cri.go:89] found id: ""
	I0816 00:36:19.183763   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.183774   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:19.183781   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:19.183841   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:19.219729   79191 cri.go:89] found id: ""
	I0816 00:36:19.219756   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.219764   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:19.219770   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:19.219815   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:19.254450   79191 cri.go:89] found id: ""
	I0816 00:36:19.254474   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.254481   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:19.254488   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:19.254540   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:19.289543   79191 cri.go:89] found id: ""
	I0816 00:36:19.289573   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.289585   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:19.289592   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:19.289651   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:19.330727   79191 cri.go:89] found id: ""
	I0816 00:36:19.330748   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.330756   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:19.330762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:19.330809   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:19.368952   79191 cri.go:89] found id: ""
	I0816 00:36:19.368978   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.368986   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:19.368992   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:19.369048   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:19.406211   79191 cri.go:89] found id: ""
	I0816 00:36:19.406247   79191 logs.go:276] 0 containers: []
	W0816 00:36:19.406258   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:19.406268   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:19.406282   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:19.457996   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:19.458032   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:19.472247   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:19.472274   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:19.542840   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:19.542862   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:19.542876   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:19.624478   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:19.624520   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:16.997148   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.496434   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:19.427251   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:21.925550   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.925858   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.376318   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:24.376431   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:22.165884   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:22.180005   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:22.180078   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:22.217434   79191 cri.go:89] found id: ""
	I0816 00:36:22.217463   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.217471   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:22.217478   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:22.217534   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:22.250679   79191 cri.go:89] found id: ""
	I0816 00:36:22.250708   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.250717   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:22.250725   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:22.250785   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:22.284294   79191 cri.go:89] found id: ""
	I0816 00:36:22.284324   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.284334   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:22.284341   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:22.284403   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:22.320747   79191 cri.go:89] found id: ""
	I0816 00:36:22.320779   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.320790   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:22.320799   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:22.320858   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:22.355763   79191 cri.go:89] found id: ""
	I0816 00:36:22.355793   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.355803   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:22.355811   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:22.355871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:22.392762   79191 cri.go:89] found id: ""
	I0816 00:36:22.392788   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.392796   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:22.392802   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:22.392860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:22.426577   79191 cri.go:89] found id: ""
	I0816 00:36:22.426605   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.426614   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:22.426621   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:22.426682   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:22.459989   79191 cri.go:89] found id: ""
	I0816 00:36:22.460018   79191 logs.go:276] 0 containers: []
	W0816 00:36:22.460030   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:22.460040   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:22.460054   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:22.545782   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:22.545820   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:22.587404   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:22.587431   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:22.638519   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:22.638559   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:22.653064   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:22.653087   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:22.734333   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.234823   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:25.248716   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:25.248787   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:25.284760   79191 cri.go:89] found id: ""
	I0816 00:36:25.284786   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.284793   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:25.284799   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:25.284870   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:25.325523   79191 cri.go:89] found id: ""
	I0816 00:36:25.325548   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.325556   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:25.325562   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:25.325621   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:25.365050   79191 cri.go:89] found id: ""
	I0816 00:36:25.365078   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.365088   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:25.365096   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:25.365155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:25.405005   79191 cri.go:89] found id: ""
	I0816 00:36:25.405038   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.405049   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:25.405062   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:25.405121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:25.444622   79191 cri.go:89] found id: ""
	I0816 00:36:25.444648   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.444656   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:25.444662   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:25.444710   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:25.485364   79191 cri.go:89] found id: ""
	I0816 00:36:25.485394   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.485404   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:25.485413   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:25.485492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:25.521444   79191 cri.go:89] found id: ""
	I0816 00:36:25.521471   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.521482   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:25.521490   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:25.521550   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:25.556763   79191 cri.go:89] found id: ""
	I0816 00:36:25.556789   79191 logs.go:276] 0 containers: []
	W0816 00:36:25.556796   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:25.556805   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:25.556817   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:25.606725   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:25.606759   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:25.623080   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:25.623108   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:25.705238   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:25.705258   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:25.705280   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:25.782188   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:25.782224   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:21.994519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:23.995061   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.494442   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:25.926835   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.427012   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:26.876206   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.876563   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:28.325018   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:28.337778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:28.337860   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:28.378452   79191 cri.go:89] found id: ""
	I0816 00:36:28.378482   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.378492   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:28.378499   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:28.378556   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:28.412103   79191 cri.go:89] found id: ""
	I0816 00:36:28.412132   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.412143   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:28.412150   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:28.412214   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:28.447363   79191 cri.go:89] found id: ""
	I0816 00:36:28.447388   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.447396   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:28.447401   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:28.447452   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:28.481199   79191 cri.go:89] found id: ""
	I0816 00:36:28.481228   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.481242   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:28.481251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:28.481305   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:28.517523   79191 cri.go:89] found id: ""
	I0816 00:36:28.517545   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.517552   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:28.517558   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:28.517620   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:28.552069   79191 cri.go:89] found id: ""
	I0816 00:36:28.552101   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.552112   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:28.552120   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:28.552193   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:28.594124   79191 cri.go:89] found id: ""
	I0816 00:36:28.594148   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.594158   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:28.594166   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:28.594228   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:28.631451   79191 cri.go:89] found id: ""
	I0816 00:36:28.631472   79191 logs.go:276] 0 containers: []
	W0816 00:36:28.631480   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:28.631488   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:28.631498   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:28.685335   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:28.685368   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:28.700852   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:28.700877   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:28.773932   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:28.773957   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:28.773972   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:28.848951   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:28.848989   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:31.389208   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:31.403731   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:31.403813   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:31.440979   79191 cri.go:89] found id: ""
	I0816 00:36:31.441010   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.441020   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:31.441028   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:31.441092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:31.476435   79191 cri.go:89] found id: ""
	I0816 00:36:31.476458   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.476465   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:31.476471   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:31.476530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:31.514622   79191 cri.go:89] found id: ""
	I0816 00:36:31.514644   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.514651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:31.514657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:31.514715   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:31.554503   79191 cri.go:89] found id: ""
	I0816 00:36:31.554533   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.554543   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:31.554551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:31.554609   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:31.590283   79191 cri.go:89] found id: ""
	I0816 00:36:31.590317   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.590325   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:31.590332   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:31.590380   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:31.625969   79191 cri.go:89] found id: ""
	I0816 00:36:31.626003   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.626014   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:31.626031   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:31.626102   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:31.660489   79191 cri.go:89] found id: ""
	I0816 00:36:31.660513   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.660520   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:31.660526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:31.660583   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:31.694728   79191 cri.go:89] found id: ""
	I0816 00:36:31.694761   79191 logs.go:276] 0 containers: []
	W0816 00:36:31.694769   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:31.694779   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:31.694790   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:31.760631   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:31.760663   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:31.774858   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:31.774886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:36:28.994228   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.994276   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.926313   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.426045   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:30.877175   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:33.378602   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	W0816 00:36:31.851125   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:31.851145   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:31.851156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:31.934491   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:31.934521   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:34.476368   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:34.489252   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:34.489308   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:34.524932   79191 cri.go:89] found id: ""
	I0816 00:36:34.524964   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.524972   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:34.524977   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:34.525032   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:34.559434   79191 cri.go:89] found id: ""
	I0816 00:36:34.559462   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.559473   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:34.559481   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:34.559543   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:34.598700   79191 cri.go:89] found id: ""
	I0816 00:36:34.598728   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.598739   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:34.598747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:34.598808   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:34.632413   79191 cri.go:89] found id: ""
	I0816 00:36:34.632438   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.632448   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:34.632456   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:34.632514   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:34.668385   79191 cri.go:89] found id: ""
	I0816 00:36:34.668409   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.668418   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:34.668425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:34.668486   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:34.703728   79191 cri.go:89] found id: ""
	I0816 00:36:34.703754   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.703764   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:34.703772   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:34.703832   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:34.743119   79191 cri.go:89] found id: ""
	I0816 00:36:34.743152   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.743161   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:34.743171   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:34.743230   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:34.778932   79191 cri.go:89] found id: ""
	I0816 00:36:34.778955   79191 logs.go:276] 0 containers: []
	W0816 00:36:34.778963   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:34.778971   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:34.778987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:34.832050   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:34.832084   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:34.845700   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:34.845728   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:34.917535   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:34.917554   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:34.917565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:35.005262   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:35.005295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:32.994435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:34.994503   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.926422   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:35.876400   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:38.376351   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:37.547107   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:37.562035   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:37.562095   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:37.605992   79191 cri.go:89] found id: ""
	I0816 00:36:37.606021   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.606028   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:37.606035   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:37.606092   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:37.642613   79191 cri.go:89] found id: ""
	I0816 00:36:37.642642   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.642653   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:37.642660   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:37.642708   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:37.677810   79191 cri.go:89] found id: ""
	I0816 00:36:37.677863   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.677875   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:37.677883   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:37.677939   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:37.714490   79191 cri.go:89] found id: ""
	I0816 00:36:37.714514   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.714522   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:37.714529   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:37.714575   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:37.750807   79191 cri.go:89] found id: ""
	I0816 00:36:37.750837   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.750844   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:37.750850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:37.750912   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:37.790307   79191 cri.go:89] found id: ""
	I0816 00:36:37.790337   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.790347   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:37.790355   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:37.790404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:37.826811   79191 cri.go:89] found id: ""
	I0816 00:36:37.826838   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.826848   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:37.826856   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:37.826920   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:37.862066   79191 cri.go:89] found id: ""
	I0816 00:36:37.862091   79191 logs.go:276] 0 containers: []
	W0816 00:36:37.862101   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:37.862112   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:37.862127   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:37.917127   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:37.917161   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:37.932986   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:37.933024   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:38.008715   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:38.008739   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:38.008754   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:38.088744   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:38.088778   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:40.643426   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:40.659064   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:40.659128   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:40.702486   79191 cri.go:89] found id: ""
	I0816 00:36:40.702513   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.702523   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:40.702530   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:40.702595   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:40.736016   79191 cri.go:89] found id: ""
	I0816 00:36:40.736044   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.736057   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:40.736064   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:40.736125   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:40.779665   79191 cri.go:89] found id: ""
	I0816 00:36:40.779704   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.779724   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:40.779733   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:40.779795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:40.818612   79191 cri.go:89] found id: ""
	I0816 00:36:40.818633   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.818640   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:40.818647   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:40.818695   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:40.855990   79191 cri.go:89] found id: ""
	I0816 00:36:40.856014   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.856021   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:40.856027   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:40.856074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:40.894792   79191 cri.go:89] found id: ""
	I0816 00:36:40.894827   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.894836   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:40.894845   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:40.894894   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:40.932233   79191 cri.go:89] found id: ""
	I0816 00:36:40.932255   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.932263   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:40.932268   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:40.932324   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:40.974601   79191 cri.go:89] found id: ""
	I0816 00:36:40.974624   79191 logs.go:276] 0 containers: []
	W0816 00:36:40.974633   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:40.974642   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:40.974660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:41.049185   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:41.049209   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:41.049223   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:41.129446   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:41.129481   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:41.170312   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:41.170341   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:41.226217   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:41.226254   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:36.995268   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:39.494273   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:41.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.426501   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.926122   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:40.877227   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:42.878644   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:43.741485   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:43.756248   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:43.756325   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:43.792440   79191 cri.go:89] found id: ""
	I0816 00:36:43.792469   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.792480   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:43.792488   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:43.792549   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:43.829906   79191 cri.go:89] found id: ""
	I0816 00:36:43.829933   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.829941   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:43.829947   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:43.830003   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:43.880305   79191 cri.go:89] found id: ""
	I0816 00:36:43.880330   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.880337   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:43.880343   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:43.880399   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:43.937899   79191 cri.go:89] found id: ""
	I0816 00:36:43.937929   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.937939   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:43.937953   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:43.938023   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:43.997578   79191 cri.go:89] found id: ""
	I0816 00:36:43.997603   79191 logs.go:276] 0 containers: []
	W0816 00:36:43.997610   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:43.997620   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:43.997672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:44.035606   79191 cri.go:89] found id: ""
	I0816 00:36:44.035629   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.035637   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:44.035643   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:44.035692   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:44.072919   79191 cri.go:89] found id: ""
	I0816 00:36:44.072950   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.072961   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:44.072968   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:44.073043   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:44.108629   79191 cri.go:89] found id: ""
	I0816 00:36:44.108659   79191 logs.go:276] 0 containers: []
	W0816 00:36:44.108681   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:44.108692   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:44.108705   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:44.149127   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:44.149151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:44.201694   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:44.201737   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:44.217161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:44.217199   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:44.284335   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:44.284362   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:44.284379   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:43.996478   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.494382   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:44.926542   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.926713   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:45.376030   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:47.875418   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.877201   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:46.869196   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:46.883519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:46.883584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:46.924767   79191 cri.go:89] found id: ""
	I0816 00:36:46.924806   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.924821   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:46.924829   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:46.924889   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:46.963282   79191 cri.go:89] found id: ""
	I0816 00:36:46.963309   79191 logs.go:276] 0 containers: []
	W0816 00:36:46.963320   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:46.963327   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:46.963389   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:47.001421   79191 cri.go:89] found id: ""
	I0816 00:36:47.001450   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.001458   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:47.001463   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:47.001518   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:47.037679   79191 cri.go:89] found id: ""
	I0816 00:36:47.037702   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.037713   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:47.037720   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:47.037778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:47.078009   79191 cri.go:89] found id: ""
	I0816 00:36:47.078039   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.078050   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:47.078056   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:47.078113   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:47.119032   79191 cri.go:89] found id: ""
	I0816 00:36:47.119056   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.119064   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:47.119069   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:47.119127   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:47.154893   79191 cri.go:89] found id: ""
	I0816 00:36:47.154919   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.154925   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:47.154933   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:47.154993   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:47.194544   79191 cri.go:89] found id: ""
	I0816 00:36:47.194571   79191 logs.go:276] 0 containers: []
	W0816 00:36:47.194582   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:47.194592   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:47.194612   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:47.267148   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:47.267172   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:47.267186   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:47.345257   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:47.345295   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:47.386207   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:47.386233   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:47.436171   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:47.436201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:49.949977   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:49.965702   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:49.965761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:50.002443   79191 cri.go:89] found id: ""
	I0816 00:36:50.002470   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.002481   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:50.002489   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:50.002548   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:50.039123   79191 cri.go:89] found id: ""
	I0816 00:36:50.039155   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.039162   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:50.039168   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:50.039220   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:50.074487   79191 cri.go:89] found id: ""
	I0816 00:36:50.074517   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.074527   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:50.074535   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:50.074593   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:50.108980   79191 cri.go:89] found id: ""
	I0816 00:36:50.109008   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.109018   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:50.109025   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:50.109082   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:50.149182   79191 cri.go:89] found id: ""
	I0816 00:36:50.149202   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.149209   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:50.149215   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:50.149261   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:50.183066   79191 cri.go:89] found id: ""
	I0816 00:36:50.183094   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.183102   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:50.183108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:50.183165   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:50.220200   79191 cri.go:89] found id: ""
	I0816 00:36:50.220231   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.220240   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:50.220246   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:50.220302   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:50.258059   79191 cri.go:89] found id: ""
	I0816 00:36:50.258083   79191 logs.go:276] 0 containers: []
	W0816 00:36:50.258092   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:50.258100   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:50.258110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:50.300560   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:50.300591   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:50.350548   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:50.350581   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:50.364792   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:50.364816   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:50.437723   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:50.437746   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:50.437761   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:48.995009   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:50.995542   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:49.425926   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:51.427896   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.926363   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:52.375826   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:54.876435   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:53.015846   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:53.029184   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:53.029246   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:53.064306   79191 cri.go:89] found id: ""
	I0816 00:36:53.064338   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.064346   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:53.064352   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:53.064404   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:53.104425   79191 cri.go:89] found id: ""
	I0816 00:36:53.104458   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.104468   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:53.104476   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:53.104538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:53.139470   79191 cri.go:89] found id: ""
	I0816 00:36:53.139493   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.139500   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:53.139506   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:53.139551   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:53.185195   79191 cri.go:89] found id: ""
	I0816 00:36:53.185225   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.185234   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:53.185242   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:53.185300   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:53.221897   79191 cri.go:89] found id: ""
	I0816 00:36:53.221925   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.221935   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:53.221943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:53.222006   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:53.258810   79191 cri.go:89] found id: ""
	I0816 00:36:53.258841   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.258852   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:53.258859   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:53.258924   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:53.298672   79191 cri.go:89] found id: ""
	I0816 00:36:53.298701   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.298711   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:53.298719   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:53.298778   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:53.333498   79191 cri.go:89] found id: ""
	I0816 00:36:53.333520   79191 logs.go:276] 0 containers: []
	W0816 00:36:53.333527   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:53.333535   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:53.333548   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.370495   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:53.370530   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:53.423938   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:53.423982   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:53.438897   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:53.438926   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:53.505951   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:53.505973   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:53.505987   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.089638   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:56.103832   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:56.103893   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:56.148010   79191 cri.go:89] found id: ""
	I0816 00:36:56.148038   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.148048   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:56.148057   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:56.148120   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:56.185631   79191 cri.go:89] found id: ""
	I0816 00:36:56.185663   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.185673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:56.185680   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:56.185739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:56.222064   79191 cri.go:89] found id: ""
	I0816 00:36:56.222093   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.222104   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:56.222112   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:56.222162   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:56.260462   79191 cri.go:89] found id: ""
	I0816 00:36:56.260494   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.260504   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:56.260513   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:56.260574   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:56.296125   79191 cri.go:89] found id: ""
	I0816 00:36:56.296154   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.296164   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:56.296172   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:56.296236   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:56.333278   79191 cri.go:89] found id: ""
	I0816 00:36:56.333305   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.333316   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:56.333324   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:56.333385   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:56.368924   79191 cri.go:89] found id: ""
	I0816 00:36:56.368952   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.368962   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:56.368970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:56.369034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:56.407148   79191 cri.go:89] found id: ""
	I0816 00:36:56.407180   79191 logs.go:276] 0 containers: []
	W0816 00:36:56.407190   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:56.407201   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:56.407215   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:56.464745   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:56.464779   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:56.478177   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:56.478204   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:56.555827   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:56.555851   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:56.555864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:56.640001   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:56.640040   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:53.495546   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.994786   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:55.926541   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:58.426865   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:57.376484   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.876765   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:36:59.181423   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:36:59.195722   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:36:59.195804   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:36:59.232043   79191 cri.go:89] found id: ""
	I0816 00:36:59.232067   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.232075   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:36:59.232081   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:36:59.232132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:36:59.270628   79191 cri.go:89] found id: ""
	I0816 00:36:59.270656   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.270673   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:36:59.270681   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:36:59.270743   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:36:59.304054   79191 cri.go:89] found id: ""
	I0816 00:36:59.304089   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.304100   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:36:59.304108   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:36:59.304169   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:36:59.339386   79191 cri.go:89] found id: ""
	I0816 00:36:59.339410   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.339417   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:36:59.339423   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:36:59.339483   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:36:59.381313   79191 cri.go:89] found id: ""
	I0816 00:36:59.381361   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.381376   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:36:59.381385   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:36:59.381449   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:36:59.417060   79191 cri.go:89] found id: ""
	I0816 00:36:59.417090   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.417101   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:36:59.417109   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:36:59.417160   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:36:59.461034   79191 cri.go:89] found id: ""
	I0816 00:36:59.461060   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.461071   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:36:59.461078   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:36:59.461136   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:36:59.496248   79191 cri.go:89] found id: ""
	I0816 00:36:59.496276   79191 logs.go:276] 0 containers: []
	W0816 00:36:59.496286   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:36:59.496297   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:36:59.496312   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:36:59.566779   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:36:59.566803   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:36:59.566829   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:36:59.651999   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:36:59.652034   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:36:59.693286   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:36:59.693310   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:36:59.746677   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:36:59.746711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:36:58.494370   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.494959   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:00.927036   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:03.425008   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.376921   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.876676   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:02.262527   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:02.277903   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:02.277965   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:02.323846   79191 cri.go:89] found id: ""
	I0816 00:37:02.323868   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.323876   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:02.323882   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:02.323938   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:02.359552   79191 cri.go:89] found id: ""
	I0816 00:37:02.359578   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.359589   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:02.359596   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:02.359657   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:02.395062   79191 cri.go:89] found id: ""
	I0816 00:37:02.395087   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.395094   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:02.395100   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:02.395155   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:02.432612   79191 cri.go:89] found id: ""
	I0816 00:37:02.432636   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.432646   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:02.432654   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:02.432712   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:02.468612   79191 cri.go:89] found id: ""
	I0816 00:37:02.468640   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.468651   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:02.468659   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:02.468716   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:02.514472   79191 cri.go:89] found id: ""
	I0816 00:37:02.514500   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.514511   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:02.514519   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:02.514576   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:02.551964   79191 cri.go:89] found id: ""
	I0816 00:37:02.551993   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.552003   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:02.552011   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:02.552061   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:02.588018   79191 cri.go:89] found id: ""
	I0816 00:37:02.588044   79191 logs.go:276] 0 containers: []
	W0816 00:37:02.588053   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:02.588063   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:02.588081   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:02.638836   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:02.638875   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:02.653581   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:02.653613   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:02.737018   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.737047   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:02.737065   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:02.819726   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:02.819763   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.364943   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:05.379433   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:05.379492   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:05.419165   79191 cri.go:89] found id: ""
	I0816 00:37:05.419191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.419198   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:05.419204   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:05.419264   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:05.454417   79191 cri.go:89] found id: ""
	I0816 00:37:05.454438   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.454446   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:05.454452   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:05.454497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:05.490162   79191 cri.go:89] found id: ""
	I0816 00:37:05.490191   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.490203   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:05.490210   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:05.490268   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:05.527303   79191 cri.go:89] found id: ""
	I0816 00:37:05.527327   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.527334   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:05.527340   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:05.527393   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:05.562271   79191 cri.go:89] found id: ""
	I0816 00:37:05.562302   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.562310   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:05.562316   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:05.562374   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:05.597800   79191 cri.go:89] found id: ""
	I0816 00:37:05.597823   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.597830   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:05.597837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:05.597905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:05.633996   79191 cri.go:89] found id: ""
	I0816 00:37:05.634021   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.634028   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:05.634034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:05.634088   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:05.672408   79191 cri.go:89] found id: ""
	I0816 00:37:05.672437   79191 logs.go:276] 0 containers: []
	W0816 00:37:05.672446   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:05.672457   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:05.672472   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:05.750956   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:05.750995   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:05.795573   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:05.795603   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:05.848560   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:05.848593   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:05.862245   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:05.862268   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:05.938704   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:02.495728   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:04.994839   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:05.425507   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:07.426459   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:06.877664   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.375601   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:08.439692   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:08.452850   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:08.452927   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:08.490015   79191 cri.go:89] found id: ""
	I0816 00:37:08.490043   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.490053   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:08.490060   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:08.490121   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:08.529631   79191 cri.go:89] found id: ""
	I0816 00:37:08.529665   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.529676   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:08.529689   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:08.529747   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:08.564858   79191 cri.go:89] found id: ""
	I0816 00:37:08.564885   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.564896   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:08.564904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:08.564966   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:08.601144   79191 cri.go:89] found id: ""
	I0816 00:37:08.601180   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.601190   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:08.601200   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:08.601257   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:08.637050   79191 cri.go:89] found id: ""
	I0816 00:37:08.637081   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.637090   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:08.637098   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:08.637158   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:08.670613   79191 cri.go:89] found id: ""
	I0816 00:37:08.670644   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.670655   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:08.670663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:08.670727   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:08.704664   79191 cri.go:89] found id: ""
	I0816 00:37:08.704690   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.704698   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:08.704704   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:08.704754   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:08.741307   79191 cri.go:89] found id: ""
	I0816 00:37:08.741337   79191 logs.go:276] 0 containers: []
	W0816 00:37:08.741348   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:08.741360   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:08.741374   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:08.755434   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:08.755459   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:08.828118   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:08.828140   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:08.828151   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:08.911565   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:08.911605   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:08.954907   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:08.954937   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.508848   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:11.521998   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:11.522060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:11.558581   79191 cri.go:89] found id: ""
	I0816 00:37:11.558611   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.558622   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:11.558630   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:11.558697   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:11.593798   79191 cri.go:89] found id: ""
	I0816 00:37:11.593822   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.593830   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:11.593836   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:11.593905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:11.629619   79191 cri.go:89] found id: ""
	I0816 00:37:11.629648   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.629658   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:11.629664   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:11.629717   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:11.666521   79191 cri.go:89] found id: ""
	I0816 00:37:11.666548   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.666556   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:11.666562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:11.666607   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:11.703374   79191 cri.go:89] found id: ""
	I0816 00:37:11.703406   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.703417   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:11.703427   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:11.703491   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:11.739374   79191 cri.go:89] found id: ""
	I0816 00:37:11.739403   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.739413   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:11.739420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:11.739475   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:11.774981   79191 cri.go:89] found id: ""
	I0816 00:37:11.775006   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.775013   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:11.775019   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:11.775074   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:06.995675   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.495024   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:09.926950   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:12.428179   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.377241   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.875723   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:11.809561   79191 cri.go:89] found id: ""
	I0816 00:37:11.809590   79191 logs.go:276] 0 containers: []
	W0816 00:37:11.809601   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:11.809612   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:11.809626   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:11.863071   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:11.863116   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:11.878161   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:11.878191   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:11.953572   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.953594   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:11.953608   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:12.035815   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:12.035848   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:14.576547   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:14.590747   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:14.590802   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:14.626732   79191 cri.go:89] found id: ""
	I0816 00:37:14.626762   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.626774   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:14.626781   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:14.626833   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:14.662954   79191 cri.go:89] found id: ""
	I0816 00:37:14.662978   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.662988   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:14.662996   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:14.663057   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:14.697618   79191 cri.go:89] found id: ""
	I0816 00:37:14.697646   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.697656   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:14.697663   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:14.697725   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:14.735137   79191 cri.go:89] found id: ""
	I0816 00:37:14.735161   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.735168   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:14.735174   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:14.735222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:14.770625   79191 cri.go:89] found id: ""
	I0816 00:37:14.770648   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.770655   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:14.770660   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:14.770718   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:14.808678   79191 cri.go:89] found id: ""
	I0816 00:37:14.808708   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.808718   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:14.808726   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:14.808795   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:14.847321   79191 cri.go:89] found id: ""
	I0816 00:37:14.847349   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.847360   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:14.847368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:14.847425   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:14.886110   79191 cri.go:89] found id: ""
	I0816 00:37:14.886136   79191 logs.go:276] 0 containers: []
	W0816 00:37:14.886147   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:14.886156   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:14.886175   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:14.971978   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:14.972013   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:15.015620   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:15.015644   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:15.067372   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:15.067405   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:15.081629   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:15.081652   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:15.151580   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:11.995551   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:13.995831   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.495016   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:14.926297   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:16.926367   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:18.927215   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:15.876514   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.877987   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:17.652362   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:17.666201   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:17.666278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:17.698723   79191 cri.go:89] found id: ""
	I0816 00:37:17.698760   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.698772   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:17.698778   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:17.698827   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:17.732854   79191 cri.go:89] found id: ""
	I0816 00:37:17.732883   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.732893   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:17.732901   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:17.732957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:17.767665   79191 cri.go:89] found id: ""
	I0816 00:37:17.767691   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.767701   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:17.767709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:17.767769   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:17.801490   79191 cri.go:89] found id: ""
	I0816 00:37:17.801512   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.801520   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:17.801526   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:17.801579   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:17.837451   79191 cri.go:89] found id: ""
	I0816 00:37:17.837479   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.837490   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:17.837498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:17.837562   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:17.872898   79191 cri.go:89] found id: ""
	I0816 00:37:17.872924   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.872934   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:17.872943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:17.873002   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:17.910325   79191 cri.go:89] found id: ""
	I0816 00:37:17.910352   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.910362   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:17.910370   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:17.910431   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:17.946885   79191 cri.go:89] found id: ""
	I0816 00:37:17.946909   79191 logs.go:276] 0 containers: []
	W0816 00:37:17.946916   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:17.946923   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:17.946935   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:18.014011   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:18.014045   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:18.028850   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:18.028886   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:18.099362   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:18.099381   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:18.099396   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:18.180552   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:18.180588   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:20.720810   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:20.733806   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:20.733887   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:20.771300   79191 cri.go:89] found id: ""
	I0816 00:37:20.771323   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.771330   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:20.771336   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:20.771394   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:20.812327   79191 cri.go:89] found id: ""
	I0816 00:37:20.812355   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.812362   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:20.812369   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:20.812430   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:20.846830   79191 cri.go:89] found id: ""
	I0816 00:37:20.846861   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.846872   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:20.846879   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:20.846948   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:20.889979   79191 cri.go:89] found id: ""
	I0816 00:37:20.890005   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.890015   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:20.890023   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:20.890086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:20.933732   79191 cri.go:89] found id: ""
	I0816 00:37:20.933762   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.933772   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:20.933778   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:20.933824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:20.972341   79191 cri.go:89] found id: ""
	I0816 00:37:20.972368   79191 logs.go:276] 0 containers: []
	W0816 00:37:20.972376   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:20.972382   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:20.972444   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:21.011179   79191 cri.go:89] found id: ""
	I0816 00:37:21.011207   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.011216   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:21.011224   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:21.011282   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:21.045645   79191 cri.go:89] found id: ""
	I0816 00:37:21.045668   79191 logs.go:276] 0 containers: []
	W0816 00:37:21.045675   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:21.045684   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:21.045694   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:21.099289   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:21.099321   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:21.113814   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:21.113858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:21.186314   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:21.186337   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:21.186355   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:21.271116   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:21.271152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:18.994476   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.996435   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:21.425187   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.425456   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:20.377999   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:22.877014   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:23.818598   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:23.832330   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:23.832387   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:23.869258   79191 cri.go:89] found id: ""
	I0816 00:37:23.869279   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.869286   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:23.869293   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:23.869342   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:23.903958   79191 cri.go:89] found id: ""
	I0816 00:37:23.903989   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.903999   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:23.904006   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:23.904060   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:23.943110   79191 cri.go:89] found id: ""
	I0816 00:37:23.943142   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.943153   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:23.943160   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:23.943222   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:23.979325   79191 cri.go:89] found id: ""
	I0816 00:37:23.979356   79191 logs.go:276] 0 containers: []
	W0816 00:37:23.979366   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:23.979374   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:23.979435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:24.017570   79191 cri.go:89] found id: ""
	I0816 00:37:24.017597   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.017607   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:24.017614   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:24.017684   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:24.051522   79191 cri.go:89] found id: ""
	I0816 00:37:24.051546   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.051555   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:24.051562   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:24.051626   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:24.087536   79191 cri.go:89] found id: ""
	I0816 00:37:24.087561   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.087572   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:24.087579   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:24.087644   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:24.123203   79191 cri.go:89] found id: ""
	I0816 00:37:24.123233   79191 logs.go:276] 0 containers: []
	W0816 00:37:24.123245   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:24.123256   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:24.123276   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:24.178185   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:24.178225   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:24.192895   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:24.192920   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:24.273471   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:24.273492   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:24.273504   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:24.357890   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:24.357936   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:23.495269   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.994859   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.427328   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.927068   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:25.376932   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:27.377168   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:29.876182   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:26.950399   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:26.964347   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:26.964406   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:27.004694   79191 cri.go:89] found id: ""
	I0816 00:37:27.004722   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.004738   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:27.004745   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:27.004800   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:27.040051   79191 cri.go:89] found id: ""
	I0816 00:37:27.040080   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.040090   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:27.040096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:27.040144   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:27.088614   79191 cri.go:89] found id: ""
	I0816 00:37:27.088642   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.088651   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:27.088657   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:27.088732   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:27.125427   79191 cri.go:89] found id: ""
	I0816 00:37:27.125450   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.125457   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:27.125464   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:27.125511   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:27.158562   79191 cri.go:89] found id: ""
	I0816 00:37:27.158592   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.158602   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:27.158609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:27.158672   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:27.192986   79191 cri.go:89] found id: ""
	I0816 00:37:27.193015   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.193026   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:27.193034   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:27.193091   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:27.228786   79191 cri.go:89] found id: ""
	I0816 00:37:27.228828   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.228847   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:27.228858   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:27.228921   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:27.262776   79191 cri.go:89] found id: ""
	I0816 00:37:27.262808   79191 logs.go:276] 0 containers: []
	W0816 00:37:27.262819   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:27.262829   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:27.262844   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:27.276444   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:27.276470   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:27.349918   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:27.349946   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:27.349958   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:27.435030   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:27.435061   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:27.484043   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:27.484069   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.038376   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:30.051467   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:30.051530   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:30.086346   79191 cri.go:89] found id: ""
	I0816 00:37:30.086376   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.086386   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:30.086394   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:30.086454   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:30.127665   79191 cri.go:89] found id: ""
	I0816 00:37:30.127691   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.127699   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:30.127704   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:30.127757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:30.169901   79191 cri.go:89] found id: ""
	I0816 00:37:30.169929   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.169939   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:30.169950   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:30.170013   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:30.212501   79191 cri.go:89] found id: ""
	I0816 00:37:30.212523   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.212530   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:30.212537   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:30.212584   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:30.256560   79191 cri.go:89] found id: ""
	I0816 00:37:30.256583   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.256591   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:30.256597   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:30.256646   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:30.291062   79191 cri.go:89] found id: ""
	I0816 00:37:30.291086   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.291093   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:30.291099   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:30.291143   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:30.328325   79191 cri.go:89] found id: ""
	I0816 00:37:30.328353   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.328361   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:30.328368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:30.328415   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:30.364946   79191 cri.go:89] found id: ""
	I0816 00:37:30.364972   79191 logs.go:276] 0 containers: []
	W0816 00:37:30.364981   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:30.364991   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:30.365005   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:30.408090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:30.408117   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:30.463421   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:30.463456   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:30.479679   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:30.479711   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:30.555394   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:30.555416   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:30.555432   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:28.494477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.494598   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:30.427146   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:32.926282   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:31.877446   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.376145   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:33.137366   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:33.150970   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:33.151030   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:33.191020   79191 cri.go:89] found id: ""
	I0816 00:37:33.191047   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.191055   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:33.191061   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:33.191112   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:33.227971   79191 cri.go:89] found id: ""
	I0816 00:37:33.228022   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.228030   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:33.228038   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:33.228089   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:33.265036   79191 cri.go:89] found id: ""
	I0816 00:37:33.265065   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.265074   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:33.265079   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:33.265126   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:33.300385   79191 cri.go:89] found id: ""
	I0816 00:37:33.300411   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.300418   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:33.300425   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:33.300487   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:33.335727   79191 cri.go:89] found id: ""
	I0816 00:37:33.335757   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.335768   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:33.335776   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:33.335839   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:33.373458   79191 cri.go:89] found id: ""
	I0816 00:37:33.373489   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.373500   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:33.373507   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:33.373568   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:33.410380   79191 cri.go:89] found id: ""
	I0816 00:37:33.410404   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.410413   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:33.410420   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:33.410480   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:33.451007   79191 cri.go:89] found id: ""
	I0816 00:37:33.451030   79191 logs.go:276] 0 containers: []
	W0816 00:37:33.451040   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:33.451049   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:33.451062   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:33.502215   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:33.502249   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:33.516123   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:33.516152   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:33.590898   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:33.590921   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:33.590944   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:33.668404   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:33.668455   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:36.209671   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:36.223498   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:36.223561   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:36.258980   79191 cri.go:89] found id: ""
	I0816 00:37:36.259041   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.259056   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:36.259064   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:36.259123   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:36.293659   79191 cri.go:89] found id: ""
	I0816 00:37:36.293687   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.293694   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:36.293703   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:36.293761   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:36.331729   79191 cri.go:89] found id: ""
	I0816 00:37:36.331756   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.331766   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:36.331773   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:36.331830   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:36.368441   79191 cri.go:89] found id: ""
	I0816 00:37:36.368470   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.368479   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:36.368486   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:36.368533   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:36.405338   79191 cri.go:89] found id: ""
	I0816 00:37:36.405368   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.405380   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:36.405389   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:36.405448   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:36.441986   79191 cri.go:89] found id: ""
	I0816 00:37:36.442018   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.442029   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:36.442038   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:36.442097   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:36.478102   79191 cri.go:89] found id: ""
	I0816 00:37:36.478183   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.478197   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:36.478206   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:36.478269   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:36.517138   79191 cri.go:89] found id: ""
	I0816 00:37:36.517167   79191 logs.go:276] 0 containers: []
	W0816 00:37:36.517178   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:36.517190   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:36.517205   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:36.570009   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:36.570042   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:36.583534   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:36.583565   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:36.651765   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:36.651794   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:36.651808   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:36.732836   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:36.732870   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:32.495090   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.996253   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:34.926615   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:37.425790   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:36.377305   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:38.876443   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.274490   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:39.288528   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:39.288591   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:39.325560   79191 cri.go:89] found id: ""
	I0816 00:37:39.325582   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.325589   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:39.325599   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:39.325656   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:39.365795   79191 cri.go:89] found id: ""
	I0816 00:37:39.365822   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.365829   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:39.365837   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:39.365906   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:39.404933   79191 cri.go:89] found id: ""
	I0816 00:37:39.404961   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.404971   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:39.404977   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:39.405041   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:39.442712   79191 cri.go:89] found id: ""
	I0816 00:37:39.442736   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.442747   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:39.442754   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:39.442814   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:39.484533   79191 cri.go:89] found id: ""
	I0816 00:37:39.484557   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.484566   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:39.484573   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:39.484636   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:39.522089   79191 cri.go:89] found id: ""
	I0816 00:37:39.522115   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.522125   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:39.522133   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:39.522194   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:39.557099   79191 cri.go:89] found id: ""
	I0816 00:37:39.557128   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.557138   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:39.557145   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:39.557205   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:39.594809   79191 cri.go:89] found id: ""
	I0816 00:37:39.594838   79191 logs.go:276] 0 containers: []
	W0816 00:37:39.594849   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:39.594859   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:39.594874   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:39.611079   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:39.611110   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:39.683156   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:39.683182   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:39.683198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:39.761198   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:39.761235   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:39.800972   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:39.801003   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:37.494553   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.495854   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:39.427910   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.926445   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:41.376128   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:43.377791   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:42.354816   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:42.368610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:42.368673   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:42.404716   79191 cri.go:89] found id: ""
	I0816 00:37:42.404738   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.404745   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:42.404753   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:42.404798   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:42.441619   79191 cri.go:89] found id: ""
	I0816 00:37:42.441649   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.441660   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:42.441667   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:42.441726   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:42.480928   79191 cri.go:89] found id: ""
	I0816 00:37:42.480965   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.480976   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:42.480983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:42.481051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:42.519187   79191 cri.go:89] found id: ""
	I0816 00:37:42.519216   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.519226   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:42.519234   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:42.519292   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:42.554928   79191 cri.go:89] found id: ""
	I0816 00:37:42.554956   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.554967   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:42.554974   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:42.555035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:42.593436   79191 cri.go:89] found id: ""
	I0816 00:37:42.593472   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.593481   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:42.593487   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:42.593545   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:42.628078   79191 cri.go:89] found id: ""
	I0816 00:37:42.628101   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.628108   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:42.628113   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:42.628172   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:42.662824   79191 cri.go:89] found id: ""
	I0816 00:37:42.662852   79191 logs.go:276] 0 containers: []
	W0816 00:37:42.662862   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:42.662871   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:42.662888   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:42.677267   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:42.677290   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:42.749570   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:42.749599   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:42.749615   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:42.831177   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:42.831213   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:42.871928   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:42.871957   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.430704   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:45.444400   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:45.444461   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:45.479503   79191 cri.go:89] found id: ""
	I0816 00:37:45.479529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.479537   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:45.479543   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:45.479596   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:45.518877   79191 cri.go:89] found id: ""
	I0816 00:37:45.518907   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.518917   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:45.518925   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:45.518992   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:45.553936   79191 cri.go:89] found id: ""
	I0816 00:37:45.553966   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.553977   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:45.553984   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:45.554035   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:45.593054   79191 cri.go:89] found id: ""
	I0816 00:37:45.593081   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.593088   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:45.593095   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:45.593147   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:45.631503   79191 cri.go:89] found id: ""
	I0816 00:37:45.631529   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.631537   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:45.631543   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:45.631599   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:45.667435   79191 cri.go:89] found id: ""
	I0816 00:37:45.667459   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.667466   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:45.667473   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:45.667529   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:45.702140   79191 cri.go:89] found id: ""
	I0816 00:37:45.702168   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.702179   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:45.702187   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:45.702250   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:45.736015   79191 cri.go:89] found id: ""
	I0816 00:37:45.736048   79191 logs.go:276] 0 containers: []
	W0816 00:37:45.736059   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:45.736070   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:45.736085   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:45.817392   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:45.817427   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:45.856421   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:45.856451   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:45.912429   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:45.912476   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:45.928411   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:45.928435   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:46.001141   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:41.995835   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.497033   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:44.426414   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:46.927720   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:45.876721   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:47.877185   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.877396   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.501317   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:48.515114   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:48.515190   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:48.553776   79191 cri.go:89] found id: ""
	I0816 00:37:48.553802   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.553810   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:48.553816   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:48.553890   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:48.589760   79191 cri.go:89] found id: ""
	I0816 00:37:48.589786   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.589794   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:48.589800   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:48.589871   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:48.629792   79191 cri.go:89] found id: ""
	I0816 00:37:48.629816   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.629825   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:48.629833   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:48.629898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:48.668824   79191 cri.go:89] found id: ""
	I0816 00:37:48.668852   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.668860   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:48.668866   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:48.668930   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:48.704584   79191 cri.go:89] found id: ""
	I0816 00:37:48.704615   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.704626   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:48.704634   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:48.704691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:48.738833   79191 cri.go:89] found id: ""
	I0816 00:37:48.738855   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.738863   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:48.738868   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:48.738928   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:48.774943   79191 cri.go:89] found id: ""
	I0816 00:37:48.774972   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.774981   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:48.774989   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:48.775051   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:48.808802   79191 cri.go:89] found id: ""
	I0816 00:37:48.808825   79191 logs.go:276] 0 containers: []
	W0816 00:37:48.808832   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:48.808841   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:48.808856   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:48.858849   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:48.858880   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:48.873338   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:48.873369   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:48.950172   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:48.950195   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:48.950209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:49.038642   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:49.038679   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:51.581947   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:51.596612   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:51.596691   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:51.631468   79191 cri.go:89] found id: ""
	I0816 00:37:51.631498   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.631509   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:51.631517   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:51.631577   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:51.666922   79191 cri.go:89] found id: ""
	I0816 00:37:51.666953   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.666963   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:51.666971   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:51.667034   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:51.707081   79191 cri.go:89] found id: ""
	I0816 00:37:51.707109   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.707116   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:51.707122   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:51.707189   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:51.743884   79191 cri.go:89] found id: ""
	I0816 00:37:51.743912   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.743925   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:51.743932   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:51.743990   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:51.779565   79191 cri.go:89] found id: ""
	I0816 00:37:51.779595   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.779603   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:51.779610   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:51.779658   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:46.994211   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:48.995446   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.495519   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:49.426703   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.426947   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:53.427050   78713 pod_ready.go:103] pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:52.377050   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.877759   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:51.818800   79191 cri.go:89] found id: ""
	I0816 00:37:51.818824   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.818831   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:51.818837   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:51.818899   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:51.855343   79191 cri.go:89] found id: ""
	I0816 00:37:51.855367   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.855374   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:51.855380   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:51.855426   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:51.890463   79191 cri.go:89] found id: ""
	I0816 00:37:51.890496   79191 logs.go:276] 0 containers: []
	W0816 00:37:51.890505   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:51.890513   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:51.890526   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:51.977168   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:51.977209   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:52.021626   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:52.021660   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:52.076983   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:52.077027   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:52.092111   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:52.092142   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:52.172738   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:54.673192   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:54.688780   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.688853   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.725279   79191 cri.go:89] found id: ""
	I0816 00:37:54.725308   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.725318   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:54.725325   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.725383   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:54.764326   79191 cri.go:89] found id: ""
	I0816 00:37:54.764353   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.764364   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:54.764372   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:54.764423   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:54.805221   79191 cri.go:89] found id: ""
	I0816 00:37:54.805252   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.805263   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:54.805270   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:54.805334   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:54.849724   79191 cri.go:89] found id: ""
	I0816 00:37:54.849750   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.849759   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:54.849765   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:54.849824   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:54.894438   79191 cri.go:89] found id: ""
	I0816 00:37:54.894460   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.894468   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:54.894475   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:54.894532   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:54.933400   79191 cri.go:89] found id: ""
	I0816 00:37:54.933422   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.933431   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:54.933439   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:54.933497   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:54.982249   79191 cri.go:89] found id: ""
	I0816 00:37:54.982277   79191 logs.go:276] 0 containers: []
	W0816 00:37:54.982286   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:54.982294   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:54.982353   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:55.024431   79191 cri.go:89] found id: ""
	I0816 00:37:55.024458   79191 logs.go:276] 0 containers: []
	W0816 00:37:55.024469   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:55.024479   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.024499   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:55.107089   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:55.107119   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:55.148949   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.148981   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.202865   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.202902   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.218528   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.218556   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:55.304995   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:53.495576   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:55.995483   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:54.926671   78713 pod_ready.go:82] duration metric: took 4m0.007058537s for pod "metrics-server-6867b74b74-pnmsm" in "kube-system" namespace to be "Ready" ...
	E0816 00:37:54.926700   78713 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:37:54.926711   78713 pod_ready.go:39] duration metric: took 4m7.919515966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:37:54.926728   78713 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:37:54.926764   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:54.926821   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:54.983024   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:54.983043   78713 cri.go:89] found id: ""
	I0816 00:37:54.983052   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:54.983103   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:54.988579   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:54.988644   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:55.035200   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.035231   78713 cri.go:89] found id: ""
	I0816 00:37:55.035241   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:55.035291   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.040701   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:55.040777   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:55.087306   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.087330   78713 cri.go:89] found id: ""
	I0816 00:37:55.087340   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:55.087422   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.092492   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:55.092560   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:55.144398   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.144424   78713 cri.go:89] found id: ""
	I0816 00:37:55.144433   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:55.144494   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.149882   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:55.149953   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:55.193442   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.193464   78713 cri.go:89] found id: ""
	I0816 00:37:55.193472   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:55.193528   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.198812   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:55.198886   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:55.238634   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.238656   78713 cri.go:89] found id: ""
	I0816 00:37:55.238666   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:55.238729   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.243141   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:55.243229   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:55.281414   78713 cri.go:89] found id: ""
	I0816 00:37:55.281439   78713 logs.go:276] 0 containers: []
	W0816 00:37:55.281449   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:55.281457   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:55.281519   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:55.319336   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.319357   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.319363   78713 cri.go:89] found id: ""
	I0816 00:37:55.319371   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:55.319431   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.323837   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:55.328777   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:55.328801   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:55.376259   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:55.376290   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:55.419553   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:55.419584   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:55.476026   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:55.476058   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:55.544263   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:55.544297   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:55.561818   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:55.561858   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:55.701342   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:55.701375   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:55.746935   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:55.746968   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:55.787200   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:55.787234   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:55.825257   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:55.825282   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:55.865569   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:55.865594   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:55.905234   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:55.905269   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:56.391175   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:56.391208   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:58.943163   78713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:58.961551   78713 api_server.go:72] duration metric: took 4m17.689832084s to wait for apiserver process to appear ...
	I0816 00:37:58.961592   78713 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:37:58.961630   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:58.961697   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:59.001773   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.001794   78713 cri.go:89] found id: ""
	I0816 00:37:59.001803   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:37:59.001876   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.006168   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:59.006222   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:59.041625   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.041647   78713 cri.go:89] found id: ""
	I0816 00:37:59.041654   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:37:59.041715   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.046258   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:59.046323   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:59.086070   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.086089   78713 cri.go:89] found id: ""
	I0816 00:37:59.086097   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:37:59.086151   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.090556   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:59.090626   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:59.129889   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.129931   78713 cri.go:89] found id: ""
	I0816 00:37:59.129942   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:37:59.130008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.135694   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:59.135775   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.375656   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.375979   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:57.805335   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:37:57.819904   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:37:57.819989   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:37:57.856119   79191 cri.go:89] found id: ""
	I0816 00:37:57.856146   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.856153   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:37:57.856160   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:37:57.856217   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:37:57.892797   79191 cri.go:89] found id: ""
	I0816 00:37:57.892825   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.892833   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:37:57.892841   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:37:57.892905   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:37:57.928753   79191 cri.go:89] found id: ""
	I0816 00:37:57.928784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.928795   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:37:57.928803   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:37:57.928884   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:37:57.963432   79191 cri.go:89] found id: ""
	I0816 00:37:57.963462   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.963474   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:37:57.963481   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:37:57.963538   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:37:57.998759   79191 cri.go:89] found id: ""
	I0816 00:37:57.998784   79191 logs.go:276] 0 containers: []
	W0816 00:37:57.998793   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:37:57.998801   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:57.998886   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:58.035262   79191 cri.go:89] found id: ""
	I0816 00:37:58.035288   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.035296   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:37:58.035303   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:58.035358   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:58.071052   79191 cri.go:89] found id: ""
	I0816 00:37:58.071079   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.071087   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:58.071092   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:37:58.071150   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:37:58.110047   79191 cri.go:89] found id: ""
	I0816 00:37:58.110074   79191 logs.go:276] 0 containers: []
	W0816 00:37:58.110083   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:37:58.110090   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:58.110101   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:58.164792   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:58.164823   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:58.178742   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:58.178770   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:37:58.251861   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:37:58.251899   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:58.251921   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:37:58.329805   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:37:58.329859   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:00.872911   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:00.887914   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:00.887986   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:00.925562   79191 cri.go:89] found id: ""
	I0816 00:38:00.925595   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.925606   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:00.925615   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:00.925669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:00.961476   79191 cri.go:89] found id: ""
	I0816 00:38:00.961498   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.961505   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:00.961510   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:00.961554   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:00.997575   79191 cri.go:89] found id: ""
	I0816 00:38:00.997599   79191 logs.go:276] 0 containers: []
	W0816 00:38:00.997608   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:00.997616   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:00.997677   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:01.035130   79191 cri.go:89] found id: ""
	I0816 00:38:01.035158   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.035169   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:01.035177   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:01.035232   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:01.073768   79191 cri.go:89] found id: ""
	I0816 00:38:01.073800   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.073811   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:01.073819   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:01.073898   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:01.107904   79191 cri.go:89] found id: ""
	I0816 00:38:01.107928   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.107937   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:01.107943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:01.108004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:01.142654   79191 cri.go:89] found id: ""
	I0816 00:38:01.142690   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.142701   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:01.142709   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:01.142766   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:01.187565   79191 cri.go:89] found id: ""
	I0816 00:38:01.187599   79191 logs.go:276] 0 containers: []
	W0816 00:38:01.187610   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:01.187621   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:01.187635   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:01.265462   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:01.265493   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:01.265508   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:01.346988   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:01.347020   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:01.390977   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:01.391006   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.443858   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:01.443892   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:57.996188   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:00.495210   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:37:59.176702   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.176728   78713 cri.go:89] found id: ""
	I0816 00:37:59.176738   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:37:59.176799   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.182305   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:37:59.182387   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:37:59.223938   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.223960   78713 cri.go:89] found id: ""
	I0816 00:37:59.223968   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:37:59.224023   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.228818   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:37:59.228884   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:37:59.264566   78713 cri.go:89] found id: ""
	I0816 00:37:59.264589   78713 logs.go:276] 0 containers: []
	W0816 00:37:59.264597   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:37:59.264606   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:37:59.264654   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:37:59.302534   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.302560   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.302565   78713 cri.go:89] found id: ""
	I0816 00:37:59.302574   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:37:59.302621   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.307021   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:37:59.311258   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:37:59.311299   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:37:59.425542   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:37:59.425574   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:37:59.466078   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:37:59.466107   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:37:59.480894   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:37:59.480925   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:37:59.524790   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:37:59.524822   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:37:59.568832   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:37:59.568862   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:37:59.619399   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:37:59.619433   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:37:59.658616   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:37:59.658645   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:37:59.720421   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:37:59.720469   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:37:59.756558   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:37:59.756586   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:37:59.798650   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:37:59.798674   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:37:59.864280   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:37:59.864323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:37:59.913086   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:37:59.913118   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:02.828194   78713 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0816 00:38:02.832896   78713 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0816 00:38:02.834035   78713 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:02.834059   78713 api_server.go:131] duration metric: took 3.87246001s to wait for apiserver health ...
	I0816 00:38:02.834067   78713 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:02.834089   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:02.834145   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:02.873489   78713 cri.go:89] found id: "a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:02.873512   78713 cri.go:89] found id: ""
	I0816 00:38:02.873521   78713 logs.go:276] 1 containers: [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6]
	I0816 00:38:02.873577   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.878807   78713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:02.878883   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:02.919930   78713 cri.go:89] found id: "a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:02.919949   78713 cri.go:89] found id: ""
	I0816 00:38:02.919957   78713 logs.go:276] 1 containers: [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a]
	I0816 00:38:02.920008   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.924459   78713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:02.924525   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:02.964609   78713 cri.go:89] found id: "8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:02.964636   78713 cri.go:89] found id: ""
	I0816 00:38:02.964644   78713 logs.go:276] 1 containers: [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5]
	I0816 00:38:02.964697   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:02.968808   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:02.968921   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:03.017177   78713 cri.go:89] found id: "dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.017201   78713 cri.go:89] found id: ""
	I0816 00:38:03.017210   78713 logs.go:276] 1 containers: [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3]
	I0816 00:38:03.017275   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.021905   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:03.021992   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:03.061720   78713 cri.go:89] found id: "513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.061741   78713 cri.go:89] found id: ""
	I0816 00:38:03.061748   78713 logs.go:276] 1 containers: [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110]
	I0816 00:38:03.061801   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.066149   78713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:03.066206   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:03.107130   78713 cri.go:89] found id: "2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.107149   78713 cri.go:89] found id: ""
	I0816 00:38:03.107156   78713 logs.go:276] 1 containers: [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2]
	I0816 00:38:03.107213   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.111323   78713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:03.111372   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:03.149906   78713 cri.go:89] found id: ""
	I0816 00:38:03.149927   78713 logs.go:276] 0 containers: []
	W0816 00:38:03.149934   78713 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:03.149940   78713 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:03.150000   78713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:03.190981   78713 cri.go:89] found id: "2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.191007   78713 cri.go:89] found id: "a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.191011   78713 cri.go:89] found id: ""
	I0816 00:38:03.191018   78713 logs.go:276] 2 containers: [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da]
	I0816 00:38:03.191066   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.195733   78713 ssh_runner.go:195] Run: which crictl
	I0816 00:38:03.199755   78713 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:03.199775   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:03.302209   78713 logs.go:123] Gathering logs for kube-apiserver [a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6] ...
	I0816 00:38:03.302239   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a17b85fff475953da93ed26926ac30f36e3b0d3cac4351e3488be872941a17b6"
	I0816 00:38:03.352505   78713 logs.go:123] Gathering logs for kube-scheduler [dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3] ...
	I0816 00:38:03.352548   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcadfb0e989757d4053cd3ba44f9201788bfb7f12efaaf12d8c9a2cbc9ebf9b3"
	I0816 00:38:03.392296   78713 logs.go:123] Gathering logs for kube-controller-manager [2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2] ...
	I0816 00:38:03.392323   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cc275164414532d2421799a853622acd2b22c3dc67db17d1af7e3e2e386a4c2"
	I0816 00:38:03.448092   78713 logs.go:123] Gathering logs for storage-provisioner [2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7] ...
	I0816 00:38:03.448130   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ba9e1d7af63a518e4b0562b3a22af24b99e4ba0ed90c68f3a467c770c1dd9d7"
	I0816 00:38:03.487516   78713 logs.go:123] Gathering logs for container status ...
	I0816 00:38:03.487541   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:03.541954   78713 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:03.541989   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:03.557026   78713 logs.go:123] Gathering logs for etcd [a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a] ...
	I0816 00:38:03.557049   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a23eed518f172353d5d55b85bf4f560ad2f739a87d660ed3d3ee36c82b7e289a"
	I0816 00:38:03.602639   78713 logs.go:123] Gathering logs for coredns [8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5] ...
	I0816 00:38:03.602670   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ecab8c44d72ad3bf3ce741d758aebac956e43c938920e883daa4c627d9100d5"
	I0816 00:38:03.642706   78713 logs.go:123] Gathering logs for kube-proxy [513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110] ...
	I0816 00:38:03.642733   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 513d50297bc22da1095275a7af8489c5b9b0df73fcc9ea829f85786b885dd110"
	I0816 00:38:03.683504   78713 logs.go:123] Gathering logs for storage-provisioner [a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da] ...
	I0816 00:38:03.683530   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14a1aef37ee32a9a81b5d585ebb2366371a1f4e6939c955307d1325048443da"
	I0816 00:38:03.721802   78713 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:03.721826   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.089579   78713 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.089621   78713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:01.376613   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:03.376837   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:06.679744   78713 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:06.679797   78713 system_pods.go:61] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.679805   78713 system_pods.go:61] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.679812   78713 system_pods.go:61] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.679819   78713 system_pods.go:61] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.679825   78713 system_pods.go:61] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.679849   78713 system_pods.go:61] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.679861   78713 system_pods.go:61] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.679869   78713 system_pods.go:61] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.679878   78713 system_pods.go:74] duration metric: took 3.845804999s to wait for pod list to return data ...
	I0816 00:38:06.679886   78713 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:06.682521   78713 default_sa.go:45] found service account: "default"
	I0816 00:38:06.682553   78713 default_sa.go:55] duration metric: took 2.660224ms for default service account to be created ...
	I0816 00:38:06.682565   78713 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:06.688149   78713 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:06.688178   78713 system_pods.go:89] "coredns-6f6b679f8f-54gqb" [6afa917f-9b07-46e9-95d3-ff8ff5e2a2fc] Running
	I0816 00:38:06.688183   78713 system_pods.go:89] "etcd-embed-certs-758469" [dffcf4e1-cb5c-4bbe-8990-a2713f4c91eb] Running
	I0816 00:38:06.688187   78713 system_pods.go:89] "kube-apiserver-embed-certs-758469" [cdb73311-f401-4a0a-89e2-409426970b16] Running
	I0816 00:38:06.688192   78713 system_pods.go:89] "kube-controller-manager-embed-certs-758469" [27e74bab-455f-4313-bffe-2cfa7764774b] Running
	I0816 00:38:06.688196   78713 system_pods.go:89] "kube-proxy-4xc89" [04b4bb32-a0cf-4147-957d-83b3ed13ab06] Running
	I0816 00:38:06.688199   78713 system_pods.go:89] "kube-scheduler-embed-certs-758469" [56a91710-aee3-4b89-bc73-0a0bc08a1be3] Running
	I0816 00:38:06.688206   78713 system_pods.go:89] "metrics-server-6867b74b74-pnmsm" [1fb83d03-46c2-4455-9455-e35c0a968ff1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:06.688213   78713 system_pods.go:89] "storage-provisioner" [caae6cfe-efca-4626-95d1-321af01f2095] Running
	I0816 00:38:06.688220   78713 system_pods.go:126] duration metric: took 5.649758ms to wait for k8s-apps to be running ...
	I0816 00:38:06.688226   78713 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:06.688268   78713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:06.706263   78713 system_svc.go:56] duration metric: took 18.025675ms WaitForService to wait for kubelet
	I0816 00:38:06.706301   78713 kubeadm.go:582] duration metric: took 4m25.434584326s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:06.706337   78713 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:06.709536   78713 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:06.709553   78713 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:06.709565   78713 node_conditions.go:105] duration metric: took 3.213145ms to run NodePressure ...
	I0816 00:38:06.709576   78713 start.go:241] waiting for startup goroutines ...
	I0816 00:38:06.709582   78713 start.go:246] waiting for cluster config update ...
	I0816 00:38:06.709593   78713 start.go:255] writing updated cluster config ...
	I0816 00:38:06.709864   78713 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:06.755974   78713 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:06.757917   78713 out.go:177] * Done! kubectl is now configured to use "embed-certs-758469" cluster and "default" namespace by default
	I0816 00:38:03.959040   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:03.973674   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:03.973758   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:04.013606   79191 cri.go:89] found id: ""
	I0816 00:38:04.013653   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.013661   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:04.013667   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:04.013737   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:04.054558   79191 cri.go:89] found id: ""
	I0816 00:38:04.054590   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.054602   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:04.054609   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:04.054667   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:04.097116   79191 cri.go:89] found id: ""
	I0816 00:38:04.097143   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.097154   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:04.097162   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:04.097223   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:04.136770   79191 cri.go:89] found id: ""
	I0816 00:38:04.136798   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.136809   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:04.136816   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:04.136865   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:04.171906   79191 cri.go:89] found id: ""
	I0816 00:38:04.171929   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.171937   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:04.171943   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:04.172004   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:04.208694   79191 cri.go:89] found id: ""
	I0816 00:38:04.208725   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.208735   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:04.208744   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:04.208803   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:04.276713   79191 cri.go:89] found id: ""
	I0816 00:38:04.276744   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.276755   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:04.276763   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:04.276823   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:04.316646   79191 cri.go:89] found id: ""
	I0816 00:38:04.316669   79191 logs.go:276] 0 containers: []
	W0816 00:38:04.316696   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:04.316707   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:04.316722   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:04.329819   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:04.329864   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:04.399032   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:04.399052   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:04.399080   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:04.487665   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:04.487698   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:04.530937   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:04.530962   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:02.496317   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:04.496477   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:05.878535   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:08.377096   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:07.087584   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:07.102015   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:07.102086   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:07.139530   79191 cri.go:89] found id: ""
	I0816 00:38:07.139559   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.139569   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:07.139577   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:07.139642   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:07.179630   79191 cri.go:89] found id: ""
	I0816 00:38:07.179659   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.179669   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:07.179675   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:07.179734   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:07.216407   79191 cri.go:89] found id: ""
	I0816 00:38:07.216435   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.216444   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:07.216449   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:07.216509   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:07.252511   79191 cri.go:89] found id: ""
	I0816 00:38:07.252536   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.252544   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:07.252551   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:07.252613   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:07.288651   79191 cri.go:89] found id: ""
	I0816 00:38:07.288679   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.288689   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:07.288698   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:07.288757   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:07.325910   79191 cri.go:89] found id: ""
	I0816 00:38:07.325963   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.325974   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:07.325982   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:07.326046   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:07.362202   79191 cri.go:89] found id: ""
	I0816 00:38:07.362230   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.362244   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:07.362251   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:07.362316   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:07.405272   79191 cri.go:89] found id: ""
	I0816 00:38:07.405302   79191 logs.go:276] 0 containers: []
	W0816 00:38:07.405313   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:07.405324   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:07.405339   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:07.461186   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:07.461222   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:07.475503   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:07.475544   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:07.555146   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:07.555165   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:07.555179   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:07.635162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:07.635201   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.174600   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:10.190418   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:10.190479   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:10.251925   79191 cri.go:89] found id: ""
	I0816 00:38:10.251960   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.251969   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:10.251974   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:10.252027   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:10.289038   79191 cri.go:89] found id: ""
	I0816 00:38:10.289078   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.289088   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:10.289096   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:10.289153   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:10.334562   79191 cri.go:89] found id: ""
	I0816 00:38:10.334591   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.334601   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:10.334609   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:10.334669   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:10.371971   79191 cri.go:89] found id: ""
	I0816 00:38:10.372000   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.372010   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:10.372018   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:10.372084   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:10.409654   79191 cri.go:89] found id: ""
	I0816 00:38:10.409685   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.409696   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:10.409703   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:10.409770   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:10.446639   79191 cri.go:89] found id: ""
	I0816 00:38:10.446666   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.446675   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:10.446683   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:10.446750   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:10.483601   79191 cri.go:89] found id: ""
	I0816 00:38:10.483629   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.483641   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:10.483648   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:10.483707   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:10.519640   79191 cri.go:89] found id: ""
	I0816 00:38:10.519670   79191 logs.go:276] 0 containers: []
	W0816 00:38:10.519679   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:10.519690   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:10.519704   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:10.603281   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:10.603300   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:10.603311   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:10.689162   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:10.689198   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:10.730701   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:10.730724   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:10.780411   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:10.780441   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:06.997726   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:09.495539   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.495753   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:10.876242   78747 pod_ready.go:103] pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:11.376332   78747 pod_ready.go:82] duration metric: took 4m0.006460655s for pod "metrics-server-6867b74b74-sxqkg" in "kube-system" namespace to be "Ready" ...
	E0816 00:38:11.376362   78747 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 00:38:11.376372   78747 pod_ready.go:39] duration metric: took 4m3.906659924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:38:11.376389   78747 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:38:11.376416   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:11.376472   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:11.425716   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:11.425741   78747 cri.go:89] found id: ""
	I0816 00:38:11.425749   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:11.425804   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.431122   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:11.431195   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:11.468622   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:11.468647   78747 cri.go:89] found id: ""
	I0816 00:38:11.468657   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:11.468713   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.474270   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:11.474329   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:11.518448   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:11.518493   78747 cri.go:89] found id: ""
	I0816 00:38:11.518502   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:11.518569   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.524185   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:11.524242   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:11.561343   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:11.561367   78747 cri.go:89] found id: ""
	I0816 00:38:11.561374   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:11.561418   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.565918   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:11.565992   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:11.606010   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.606036   78747 cri.go:89] found id: ""
	I0816 00:38:11.606043   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:11.606097   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.610096   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:11.610166   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:11.646204   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:11.646229   78747 cri.go:89] found id: ""
	I0816 00:38:11.646238   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:11.646295   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.650405   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:11.650467   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:11.690407   78747 cri.go:89] found id: ""
	I0816 00:38:11.690436   78747 logs.go:276] 0 containers: []
	W0816 00:38:11.690446   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:11.690454   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:11.690510   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:11.736695   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:11.736722   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:11.736729   78747 cri.go:89] found id: ""
	I0816 00:38:11.736738   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:11.736803   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.741022   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:11.744983   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:11.745011   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:11.791452   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:11.791484   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:12.304425   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:12.304470   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:12.341318   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:12.341353   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:12.401425   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:12.401464   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:12.476598   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:12.476653   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:12.495594   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:12.495629   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:12.645961   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:12.645991   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:12.697058   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:12.697091   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:12.749085   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:12.749117   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:12.795786   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:12.795831   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:12.835928   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:12.835959   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:12.872495   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:12.872524   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:13.294689   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:13.308762   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:13.308822   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:13.345973   79191 cri.go:89] found id: ""
	I0816 00:38:13.346004   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.346015   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:38:13.346022   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:13.346083   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:13.382905   79191 cri.go:89] found id: ""
	I0816 00:38:13.382934   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.382945   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:38:13.382952   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:13.383001   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:13.417616   79191 cri.go:89] found id: ""
	I0816 00:38:13.417650   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.417662   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:38:13.417669   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:13.417739   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:13.453314   79191 cri.go:89] found id: ""
	I0816 00:38:13.453350   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.453360   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:38:13.453368   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:13.453435   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:13.488507   79191 cri.go:89] found id: ""
	I0816 00:38:13.488536   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.488547   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:38:13.488555   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:13.488614   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:13.527064   79191 cri.go:89] found id: ""
	I0816 00:38:13.527095   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.527108   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:38:13.527116   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:13.527178   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:13.562838   79191 cri.go:89] found id: ""
	I0816 00:38:13.562867   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.562876   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:13.562882   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:38:13.562944   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:38:13.598924   79191 cri.go:89] found id: ""
	I0816 00:38:13.598963   79191 logs.go:276] 0 containers: []
	W0816 00:38:13.598974   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:38:13.598985   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:13.598999   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:13.651122   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:13.651156   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:13.665255   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:13.665281   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:38:13.742117   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:38:13.742135   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:13.742148   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:13.824685   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:38:13.824719   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.366542   79191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:16.380855   79191 kubeadm.go:597] duration metric: took 4m3.665876253s to restartPrimaryControlPlane
	W0816 00:38:16.380919   79191 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:38:16.380946   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:38:13.496702   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.996304   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:15.421355   78747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:38:15.437651   78747 api_server.go:72] duration metric: took 4m15.224557183s to wait for apiserver process to appear ...
	I0816 00:38:15.437677   78747 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:38:15.437721   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:15.437782   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:15.473240   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:15.473265   78747 cri.go:89] found id: ""
	I0816 00:38:15.473273   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:15.473335   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.477666   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:15.477734   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:15.526073   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:15.526095   78747 cri.go:89] found id: ""
	I0816 00:38:15.526104   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:15.526165   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.530706   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:15.530775   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:15.571124   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:15.571149   78747 cri.go:89] found id: ""
	I0816 00:38:15.571159   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:15.571217   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.578613   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:15.578690   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:15.617432   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:15.617454   78747 cri.go:89] found id: ""
	I0816 00:38:15.617464   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:15.617529   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.621818   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:15.621899   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:15.658963   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:15.658981   78747 cri.go:89] found id: ""
	I0816 00:38:15.658988   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:15.659037   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.663170   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:15.663230   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:15.699297   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.699322   78747 cri.go:89] found id: ""
	I0816 00:38:15.699331   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:15.699388   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.704029   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:15.704085   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:15.742790   78747 cri.go:89] found id: ""
	I0816 00:38:15.742816   78747 logs.go:276] 0 containers: []
	W0816 00:38:15.742825   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:15.742830   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:15.742875   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:15.776898   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:15.776918   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:15.776922   78747 cri.go:89] found id: ""
	I0816 00:38:15.776945   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:15.777007   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.781511   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:15.785953   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:15.785981   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:15.840461   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:15.840498   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:16.320285   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:16.320323   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:16.362171   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:16.362200   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:16.444803   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:16.444834   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:16.461705   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:16.461732   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:16.576190   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:16.576220   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:16.626407   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:16.626449   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:16.673004   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:16.673036   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:16.724770   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:16.724797   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:16.764812   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:16.764838   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:16.804268   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:16.804300   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:16.841197   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:16.841221   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.380352   78747 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I0816 00:38:19.386760   78747 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I0816 00:38:19.387751   78747 api_server.go:141] control plane version: v1.31.0
	I0816 00:38:19.387773   78747 api_server.go:131] duration metric: took 3.950088801s to wait for apiserver health ...
	I0816 00:38:19.387781   78747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:38:19.387801   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:38:19.387843   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:38:19.429928   78747 cri.go:89] found id: "169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:19.429952   78747 cri.go:89] found id: ""
	I0816 00:38:19.429961   78747 logs.go:276] 1 containers: [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46]
	I0816 00:38:19.430021   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.434822   78747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:38:19.434870   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:38:19.476789   78747 cri.go:89] found id: "d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:19.476811   78747 cri.go:89] found id: ""
	I0816 00:38:19.476819   78747 logs.go:276] 1 containers: [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87]
	I0816 00:38:19.476869   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.481574   78747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:38:19.481640   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:38:19.528718   78747 cri.go:89] found id: "15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:19.528742   78747 cri.go:89] found id: ""
	I0816 00:38:19.528750   78747 logs.go:276] 1 containers: [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c]
	I0816 00:38:19.528799   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.533391   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:38:19.533455   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:38:19.581356   78747 cri.go:89] found id: "eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:19.581374   78747 cri.go:89] found id: ""
	I0816 00:38:19.581381   78747 logs.go:276] 1 containers: [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60]
	I0816 00:38:19.581427   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.585915   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:38:19.585977   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:38:19.623514   78747 cri.go:89] found id: "9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:19.623544   78747 cri.go:89] found id: ""
	I0816 00:38:19.623552   78747 logs.go:276] 1 containers: [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8]
	I0816 00:38:19.623606   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.627652   78747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:38:19.627711   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:38:19.663933   78747 cri.go:89] found id: "84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:19.663957   78747 cri.go:89] found id: ""
	I0816 00:38:19.663967   78747 logs.go:276] 1 containers: [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86]
	I0816 00:38:19.664032   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.668093   78747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:38:19.668162   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:38:19.707688   78747 cri.go:89] found id: ""
	I0816 00:38:19.707716   78747 logs.go:276] 0 containers: []
	W0816 00:38:19.707726   78747 logs.go:278] No container was found matching "kindnet"
	I0816 00:38:19.707741   78747 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0816 00:38:19.707804   78747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 00:38:19.745900   78747 cri.go:89] found id: "31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:19.745930   78747 cri.go:89] found id: "d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:19.745935   78747 cri.go:89] found id: ""
	I0816 00:38:19.745944   78747 logs.go:276] 2 containers: [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae]
	I0816 00:38:19.746002   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.750934   78747 ssh_runner.go:195] Run: which crictl
	I0816 00:38:19.755022   78747 logs.go:123] Gathering logs for container status ...
	I0816 00:38:19.755044   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:38:19.807228   78747 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:38:19.807257   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 00:38:19.918242   78747 logs.go:123] Gathering logs for etcd [d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87] ...
	I0816 00:38:19.918274   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6e8ce8b4b577d7657fd4da5bf52ea2911d782f8020c726b1cc57c72b9dced87"
	I0816 00:38:21.772367   79191 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.39139467s)
	I0816 00:38:21.772449   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:18.495150   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:20.995073   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:19.969165   78747 logs.go:123] Gathering logs for coredns [15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c] ...
	I0816 00:38:19.969198   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15fd3e395581c3627866f39f3edfdeb4d711f4438b683559f4a02c71814cea8c"
	I0816 00:38:20.008945   78747 logs.go:123] Gathering logs for kube-proxy [9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8] ...
	I0816 00:38:20.008975   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9821dfda7cc4346391b8fe63a7caa7c6df9228cc3b8f75b25962378cc0bdd1b8"
	I0816 00:38:20.050080   78747 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:38:20.050120   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:38:20.450059   78747 logs.go:123] Gathering logs for storage-provisioner [31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51] ...
	I0816 00:38:20.450107   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31400c13619c18c611124b161b7957d5efb485dca9003f223c0a3f6e8b15cf51"
	I0816 00:38:20.490694   78747 logs.go:123] Gathering logs for storage-provisioner [d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae] ...
	I0816 00:38:20.490721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d624b2f88ce3e980e4bd58e0f72a1f91952b2a20c8363c5de1eb5c3f33dbb4ae"
	I0816 00:38:20.532856   78747 logs.go:123] Gathering logs for kubelet ...
	I0816 00:38:20.532890   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:38:20.609130   78747 logs.go:123] Gathering logs for dmesg ...
	I0816 00:38:20.609178   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 00:38:20.624248   78747 logs.go:123] Gathering logs for kube-apiserver [169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46] ...
	I0816 00:38:20.624279   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169a7e51493aa61dc355667a42a44c2fa2e35543ef6303b08cbb2f3193c7dd46"
	I0816 00:38:20.675636   78747 logs.go:123] Gathering logs for kube-scheduler [eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60] ...
	I0816 00:38:20.675669   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb4c36b11d03e2f8579f952cb573b4f3ca61074e9f91438fc41bca0c99591f60"
	I0816 00:38:20.716694   78747 logs.go:123] Gathering logs for kube-controller-manager [84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86] ...
	I0816 00:38:20.716721   78747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84380e27c5a9d0029f294d3f86b926a970576474b40c0c7939af706243444b86"
	I0816 00:38:23.289748   78747 system_pods.go:59] 8 kube-system pods found
	I0816 00:38:23.289773   78747 system_pods.go:61] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.289778   78747 system_pods.go:61] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.289782   78747 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.289786   78747 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.289789   78747 system_pods.go:61] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.289792   78747 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.289799   78747 system_pods.go:61] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.289814   78747 system_pods.go:61] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.289827   78747 system_pods.go:74] duration metric: took 3.902040304s to wait for pod list to return data ...
	I0816 00:38:23.289836   78747 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:38:23.293498   78747 default_sa.go:45] found service account: "default"
	I0816 00:38:23.293528   78747 default_sa.go:55] duration metric: took 3.671585ms for default service account to be created ...
	I0816 00:38:23.293539   78747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:38:23.298509   78747 system_pods.go:86] 8 kube-system pods found
	I0816 00:38:23.298534   78747 system_pods.go:89] "coredns-6f6b679f8f-4n9qq" [5611de0e-5480-4841-bfb5-68050fa068aa] Running
	I0816 00:38:23.298540   78747 system_pods.go:89] "etcd-default-k8s-diff-port-616827" [adc6b690-798d-4801-b4d2-3c0f126cce61] Running
	I0816 00:38:23.298545   78747 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-616827" [b6aafe35-6014-4f24-990c-858b27a3d774] Running
	I0816 00:38:23.298549   78747 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-616827" [94b3c751-ed69-4a87-b540-1da8e2227cb2] Running
	I0816 00:38:23.298552   78747 system_pods.go:89] "kube-proxy-f99ds" [3d8f9913-5496-4fda-800e-c942e714f13e] Running
	I0816 00:38:23.298556   78747 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-616827" [01dec7af-ba80-439f-9720-d93b518f512f] Running
	I0816 00:38:23.298561   78747 system_pods.go:89] "metrics-server-6867b74b74-sxqkg" [6443b455-56f9-4532-8156-847298f5e9eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:38:23.298567   78747 system_pods.go:89] "storage-provisioner" [fa790373-a4ce-4e37-ba86-c1b0ae1074ca] Running
	I0816 00:38:23.298576   78747 system_pods.go:126] duration metric: took 5.030455ms to wait for k8s-apps to be running ...
	I0816 00:38:23.298585   78747 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:38:23.298632   78747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:38:23.318383   78747 system_svc.go:56] duration metric: took 19.787836ms WaitForService to wait for kubelet
	I0816 00:38:23.318419   78747 kubeadm.go:582] duration metric: took 4m23.105331758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:38:23.318446   78747 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:38:23.322398   78747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:38:23.322425   78747 node_conditions.go:123] node cpu capacity is 2
	I0816 00:38:23.322436   78747 node_conditions.go:105] duration metric: took 3.985107ms to run NodePressure ...
	I0816 00:38:23.322447   78747 start.go:241] waiting for startup goroutines ...
	I0816 00:38:23.322454   78747 start.go:246] waiting for cluster config update ...
	I0816 00:38:23.322464   78747 start.go:255] writing updated cluster config ...
	I0816 00:38:23.322801   78747 ssh_runner.go:195] Run: rm -f paused
	I0816 00:38:23.374057   78747 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:38:23.376186   78747 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-616827" cluster and "default" namespace by default
	I0816 00:38:21.788969   79191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:38:21.800050   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:38:21.811193   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:38:21.811216   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:38:21.811260   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:38:21.821328   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:38:21.821391   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:38:21.831777   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:38:21.841357   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:38:21.841424   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:38:21.851564   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.861262   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:38:21.861322   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:38:21.871929   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:38:21.881544   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:38:21.881595   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:38:21.891725   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:38:22.120640   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
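The [WARNING Service-Kubelet] line above is kubeadm noting that the kubelet systemd unit is not enabled; minikube starts the kubelet itself, so the warning is harmless here. On a plain host it would be cleared exactly as kubeadm suggests (illustrative, run on the node):

    # enable and start the kubelet unit so it survives reboots
    sudo systemctl enable --now kubelet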
	I0816 00:38:22.997351   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:25.494851   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:27.494976   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:29.495248   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:31.994586   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:33.995565   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:36.494547   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:38.495194   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:40.995653   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:42.996593   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:45.495409   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:47.496072   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:49.997645   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:52.496097   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:54.994390   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:56.995869   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:38:58.996230   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:01.495217   78489 pod_ready.go:103] pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:02.989403   78489 pod_ready.go:82] duration metric: took 4m0.001106911s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" ...
	E0816 00:39:02.989435   78489 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mm5l7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 00:39:02.989456   78489 pod_ready.go:39] duration metric: took 4m14.547419665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:02.989488   78489 kubeadm.go:597] duration metric: took 4m21.799297957s to restartPrimaryControlPlane
	W0816 00:39:02.989550   78489 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 00:39:02.989582   78489 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:39:29.166109   78489 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.176504479s)
	I0816 00:39:29.166193   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:29.188082   78489 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 00:39:29.207577   78489 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:39:29.230485   78489 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:39:29.230510   78489 kubeadm.go:157] found existing configuration files:
	
	I0816 00:39:29.230564   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:39:29.242106   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:39:29.242177   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:39:29.258756   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:39:29.272824   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:39:29.272896   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:39:29.285574   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.294909   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:39:29.294985   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:39:29.304843   78489 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:39:29.315125   78489 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:39:29.315173   78489 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:39:29.325422   78489 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:39:29.375775   78489 kubeadm.go:310] W0816 00:39:29.358885    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.376658   78489 kubeadm.go:310] W0816 00:39:29.359753    3051 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 00:39:29.504337   78489 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:39:38.219769   78489 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 00:39:38.219865   78489 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:39:38.219968   78489 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:39:38.220094   78489 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:39:38.220215   78489 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 00:39:38.220302   78489 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:39:38.221971   78489 out.go:235]   - Generating certificates and keys ...
	I0816 00:39:38.222037   78489 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:39:38.222119   78489 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:39:38.222234   78489 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:39:38.222316   78489 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:39:38.222430   78489 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:39:38.222509   78489 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:39:38.222584   78489 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:39:38.222684   78489 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:39:38.222767   78489 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:39:38.222831   78489 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:39:38.222862   78489 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:39:38.222943   78489 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:39:38.223035   78489 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:39:38.223121   78489 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 00:39:38.223212   78489 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:39:38.223299   78489 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:39:38.223355   78489 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:39:38.223452   78489 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:39:38.223534   78489 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:39:38.225012   78489 out.go:235]   - Booting up control plane ...
	I0816 00:39:38.225086   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:39:38.225153   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:39:38.225211   78489 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:39:38.225296   78489 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:39:38.225366   78489 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:39:38.225399   78489 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:39:38.225542   78489 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 00:39:38.225706   78489 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 00:39:38.225803   78489 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001324649s
	I0816 00:39:38.225917   78489 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 00:39:38.226004   78489 kubeadm.go:310] [api-check] The API server is healthy after 5.001672205s
	I0816 00:39:38.226125   78489 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 00:39:38.226267   78489 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 00:39:38.226352   78489 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 00:39:38.226537   78489 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-819398 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 00:39:38.226620   78489 kubeadm.go:310] [bootstrap-token] Using token: 4qqrpj.xeaneqftblh8gcp3
	I0816 00:39:38.227962   78489 out.go:235]   - Configuring RBAC rules ...
	I0816 00:39:38.228060   78489 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 00:39:38.228140   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 00:39:38.228290   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 00:39:38.228437   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 00:39:38.228558   78489 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 00:39:38.228697   78489 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 00:39:38.228877   78489 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 00:39:38.228942   78489 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 00:39:38.229000   78489 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 00:39:38.229010   78489 kubeadm.go:310] 
	I0816 00:39:38.229086   78489 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 00:39:38.229096   78489 kubeadm.go:310] 
	I0816 00:39:38.229160   78489 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 00:39:38.229166   78489 kubeadm.go:310] 
	I0816 00:39:38.229186   78489 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 00:39:38.229252   78489 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 00:39:38.229306   78489 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 00:39:38.229312   78489 kubeadm.go:310] 
	I0816 00:39:38.229361   78489 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 00:39:38.229367   78489 kubeadm.go:310] 
	I0816 00:39:38.229403   78489 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 00:39:38.229408   78489 kubeadm.go:310] 
	I0816 00:39:38.229447   78489 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 00:39:38.229504   78489 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 00:39:38.229562   78489 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 00:39:38.229567   78489 kubeadm.go:310] 
	I0816 00:39:38.229636   78489 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 00:39:38.229701   78489 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 00:39:38.229707   78489 kubeadm.go:310] 
	I0816 00:39:38.229793   78489 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.229925   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 \
	I0816 00:39:38.229954   78489 kubeadm.go:310] 	--control-plane 
	I0816 00:39:38.229960   78489 kubeadm.go:310] 
	I0816 00:39:38.230029   78489 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 00:39:38.230038   78489 kubeadm.go:310] 
	I0816 00:39:38.230109   78489 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4qqrpj.xeaneqftblh8gcp3 \
	I0816 00:39:38.230211   78489 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cfc4cf5ef6d0a82403ca682d22bcdfb90e1d6ce4fde6ed8d87ecc45bbf9957a8 
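The join command printed by kubeadm above is only usable while its bootstrap token is valid (24h by default). If a node needs to join later, a fresh command can be generated on the control-plane node; this is the standard kubeadm invocation, not something taken from this log:

    # print a new join command with a fresh bootstrap token
    sudo kubeadm token create --print-join-command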
	I0816 00:39:38.230223   78489 cni.go:84] Creating CNI manager for ""
	I0816 00:39:38.230232   78489 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0816 00:39:38.231742   78489 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 00:39:38.233079   78489 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 00:39:38.245435   78489 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
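The bridge CNI configuration written above (the 496-byte 1-k8s.conflist) can be inspected on the node to confirm what was actually installed; paths are taken from the log, the commands themselves are illustrative:

    # list CNI configs and show the conflist minikube just wrote
    ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist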
	I0816 00:39:38.269502   78489 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 00:39:38.269566   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.269593   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-819398 minikube.k8s.io/updated_at=2024_08_16T00_39_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=fe9c1d9e27059a205b0df8e5e482803b65ef8774 minikube.k8s.io/name=no-preload-819398 minikube.k8s.io/primary=true
	I0816 00:39:38.304272   78489 ops.go:34] apiserver oom_adj: -16
	I0816 00:39:38.485643   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:38.986569   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.486177   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:39.985737   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.486311   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:40.985981   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.486071   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:41.986414   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.486292   78489 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 00:39:42.603092   78489 kubeadm.go:1113] duration metric: took 4.333590575s to wait for elevateKubeSystemPrivileges
	I0816 00:39:42.603133   78489 kubeadm.go:394] duration metric: took 5m1.4690157s to StartCluster
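The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges step polling until the default ServiceAccount exists, after which the minikube-rbac cluster-admin binding can take effect. The equivalent check from a workstation, using the context name from the log (illustrative):

    # the object the polling loop above is waiting for
    kubectl --context no-preload-819398 -n default get serviceaccount default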
	I0816 00:39:42.603158   78489 settings.go:142] acquiring lock: {Name:mkf1f1bbcc721e1ea7417c31a3fa0ba7adc09148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.603258   78489 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:39:42.604833   78489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/kubeconfig: {Name:mk2db82f82aad660bb7e44599a558b1b46a75c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 00:39:42.605072   78489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0816 00:39:42.605133   78489 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 00:39:42.605219   78489 addons.go:69] Setting storage-provisioner=true in profile "no-preload-819398"
	I0816 00:39:42.605254   78489 addons.go:234] Setting addon storage-provisioner=true in "no-preload-819398"
	I0816 00:39:42.605251   78489 addons.go:69] Setting default-storageclass=true in profile "no-preload-819398"
	I0816 00:39:42.605259   78489 addons.go:69] Setting metrics-server=true in profile "no-preload-819398"
	I0816 00:39:42.605295   78489 config.go:182] Loaded profile config "no-preload-819398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:39:42.605308   78489 addons.go:234] Setting addon metrics-server=true in "no-preload-819398"
	I0816 00:39:42.605309   78489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-819398"
	W0816 00:39:42.605320   78489 addons.go:243] addon metrics-server should already be in state true
	W0816 00:39:42.605266   78489 addons.go:243] addon storage-provisioner should already be in state true
	I0816 00:39:42.605355   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605370   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.605697   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605717   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605731   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605735   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.605777   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.605837   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.606458   78489 out.go:177] * Verifying Kubernetes components...
	I0816 00:39:42.607740   78489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 00:39:42.622512   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0816 00:39:42.623130   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.623697   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.623720   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.624070   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.624666   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.624695   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.626221   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0816 00:39:42.626220   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0816 00:39:42.626608   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.626695   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.627158   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627179   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627329   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.627346   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.627490   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.627696   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.628049   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.628165   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.628189   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.632500   78489 addons.go:234] Setting addon default-storageclass=true in "no-preload-819398"
	W0816 00:39:42.632523   78489 addons.go:243] addon default-storageclass should already be in state true
	I0816 00:39:42.632554   78489 host.go:66] Checking if "no-preload-819398" exists ...
	I0816 00:39:42.632897   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.632928   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.644779   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0816 00:39:42.645422   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.645995   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.646026   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.646395   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.646607   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.646960   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0816 00:39:42.647374   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.648126   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.648141   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.648471   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.649494   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.649732   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.651509   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.651600   78489 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 00:39:42.652823   78489 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 00:39:42.652936   78489 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:42.652951   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 00:39:42.652970   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654197   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 00:39:42.654217   78489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 00:39:42.654234   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.654380   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I0816 00:39:42.654812   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.655316   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.655332   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.655784   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.656330   78489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0816 00:39:42.656356   78489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0816 00:39:42.659148   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659319   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659629   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659648   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659776   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.659794   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.659959   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660138   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.660164   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660330   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.660444   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660478   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.660587   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.660583   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.674431   78489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
	I0816 00:39:42.674827   78489 main.go:141] libmachine: () Calling .GetVersion
	I0816 00:39:42.675399   78489 main.go:141] libmachine: Using API Version  1
	I0816 00:39:42.675420   78489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 00:39:42.675756   78489 main.go:141] libmachine: () Calling .GetMachineName
	I0816 00:39:42.675993   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetState
	I0816 00:39:42.677956   78489 main.go:141] libmachine: (no-preload-819398) Calling .DriverName
	I0816 00:39:42.678195   78489 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:42.678211   78489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 00:39:42.678230   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHHostname
	I0816 00:39:42.681163   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681593   78489 main.go:141] libmachine: (no-preload-819398) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:2c", ip: ""} in network mk-no-preload-819398: {Iface:virbr4 ExpiryTime:2024-08-16 01:34:16 +0000 UTC Type:0 Mac:52:54:00:ee:9f:2c Iaid: IPaddr:192.168.61.15 Prefix:24 Hostname:no-preload-819398 Clientid:01:52:54:00:ee:9f:2c}
	I0816 00:39:42.681615   78489 main.go:141] libmachine: (no-preload-819398) DBG | domain no-preload-819398 has defined IP address 192.168.61.15 and MAC address 52:54:00:ee:9f:2c in network mk-no-preload-819398
	I0816 00:39:42.681916   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHPort
	I0816 00:39:42.682099   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHKeyPath
	I0816 00:39:42.682197   78489 main.go:141] libmachine: (no-preload-819398) Calling .GetSSHUsername
	I0816 00:39:42.682276   78489 sshutil.go:53] new ssh client: &{IP:192.168.61.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/no-preload-819398/id_rsa Username:docker}
	I0816 00:39:42.822056   78489 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 00:39:42.840356   78489 node_ready.go:35] waiting up to 6m0s for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852864   78489 node_ready.go:49] node "no-preload-819398" has status "Ready":"True"
	I0816 00:39:42.852887   78489 node_ready.go:38] duration metric: took 12.497677ms for node "no-preload-819398" to be "Ready" ...
	I0816 00:39:42.852899   78489 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:42.866637   78489 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:42.908814   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 00:39:42.908832   78489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 00:39:42.949047   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 00:39:42.949070   78489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 00:39:42.959159   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 00:39:43.021536   78489 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.021557   78489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 00:39:43.068214   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 00:39:43.082144   78489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 00:39:43.243834   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.243857   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244177   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244192   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.244201   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.244212   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.244451   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.244505   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:43.250358   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:43.250376   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:43.250608   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:43.250648   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:43.250656   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419115   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.350866587s)
	I0816 00:39:44.419166   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419175   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419519   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419545   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.419542   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419561   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.419573   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.419824   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.419836   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.419851   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.436623   78489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.354435707s)
	I0816 00:39:44.436682   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.436697   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437131   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437150   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437160   78489 main.go:141] libmachine: Making call to close driver server
	I0816 00:39:44.437169   78489 main.go:141] libmachine: (no-preload-819398) Calling .Close
	I0816 00:39:44.437207   78489 main.go:141] libmachine: (no-preload-819398) DBG | Closing plugin on server side
	I0816 00:39:44.437495   78489 main.go:141] libmachine: Successfully made call to close driver server
	I0816 00:39:44.437517   78489 main.go:141] libmachine: Making call to close connection to plugin binary
	I0816 00:39:44.437528   78489 addons.go:475] Verifying addon metrics-server=true in "no-preload-819398"
	I0816 00:39:44.439622   78489 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0816 00:39:44.441097   78489 addons.go:510] duration metric: took 1.835961958s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
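After the addons are applied, their state for this profile can be double-checked outside the test harness. A rough sketch, with the profile name taken from the log above:

    # addon status for the profile, and the metrics-server deployment it created
    minikube addons list -p no-preload-819398
    kubectl --context no-preload-819398 -n kube-system get deploy metrics-server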
	I0816 00:39:44.878479   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:47.373009   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:49.380832   78489 pod_ready.go:103] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"False"
	I0816 00:39:50.372883   78489 pod_ready.go:93] pod "etcd-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.372919   78489 pod_ready.go:82] duration metric: took 7.506242182s for pod "etcd-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.372933   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378463   78489 pod_ready.go:93] pod "kube-apiserver-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.378486   78489 pod_ready.go:82] duration metric: took 5.546402ms for pod "kube-apiserver-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.378496   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383347   78489 pod_ready.go:93] pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.383364   78489 pod_ready.go:82] duration metric: took 4.862995ms for pod "kube-controller-manager-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.383374   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387672   78489 pod_ready.go:93] pod "kube-proxy-nl7g6" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.387693   78489 pod_ready.go:82] duration metric: took 4.312811ms for pod "kube-proxy-nl7g6" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.387703   78489 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391921   78489 pod_ready.go:93] pod "kube-scheduler-no-preload-819398" in "kube-system" namespace has status "Ready":"True"
	I0816 00:39:50.391939   78489 pod_ready.go:82] duration metric: took 4.229092ms for pod "kube-scheduler-no-preload-819398" in "kube-system" namespace to be "Ready" ...
	I0816 00:39:50.391945   78489 pod_ready.go:39] duration metric: took 7.539034647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 00:39:50.391958   78489 api_server.go:52] waiting for apiserver process to appear ...
	I0816 00:39:50.392005   78489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 00:39:50.407980   78489 api_server.go:72] duration metric: took 7.802877941s to wait for apiserver process to appear ...
	I0816 00:39:50.408017   78489 api_server.go:88] waiting for apiserver healthz status ...
	I0816 00:39:50.408039   78489 api_server.go:253] Checking apiserver healthz at https://192.168.61.15:8443/healthz ...
	I0816 00:39:50.412234   78489 api_server.go:279] https://192.168.61.15:8443/healthz returned 200:
	ok
	I0816 00:39:50.413278   78489 api_server.go:141] control plane version: v1.31.0
	I0816 00:39:50.413297   78489 api_server.go:131] duration metric: took 5.273051ms to wait for apiserver health ...
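The healthz probe logged above is a plain HTTPS GET against the apiserver and can be reproduced directly; the IP and port come from the log, and -k skips certificate verification for brevity:

    # same endpoint the test polls; expects HTTP 200 with the body "ok"
    curl -k https://192.168.61.15:8443/healthz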
	I0816 00:39:50.413304   78489 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 00:39:50.573185   78489 system_pods.go:59] 9 kube-system pods found
	I0816 00:39:50.573226   78489 system_pods.go:61] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.573233   78489 system_pods.go:61] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.573239   78489 system_pods.go:61] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.573244   78489 system_pods.go:61] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.573250   78489 system_pods.go:61] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.573257   78489 system_pods.go:61] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.573262   78489 system_pods.go:61] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.573271   78489 system_pods.go:61] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.573278   78489 system_pods.go:61] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.573288   78489 system_pods.go:74] duration metric: took 159.97729ms to wait for pod list to return data ...
	I0816 00:39:50.573301   78489 default_sa.go:34] waiting for default service account to be created ...
	I0816 00:39:50.771164   78489 default_sa.go:45] found service account: "default"
	I0816 00:39:50.771189   78489 default_sa.go:55] duration metric: took 197.881739ms for default service account to be created ...
	I0816 00:39:50.771198   78489 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 00:39:50.973415   78489 system_pods.go:86] 9 kube-system pods found
	I0816 00:39:50.973448   78489 system_pods.go:89] "coredns-6f6b679f8f-5gdv9" [4e2bb7c6-b9f2-44b2-bff1-e7c5f163c208] Running
	I0816 00:39:50.973453   78489 system_pods.go:89] "coredns-6f6b679f8f-wqr8r" [46a3f3eb-5b2c-4bca-a1c6-b33beca82a09] Running
	I0816 00:39:50.973457   78489 system_pods.go:89] "etcd-no-preload-819398" [a478f74e-e9b1-4b8d-9198-2684c02b2b71] Running
	I0816 00:39:50.973461   78489 system_pods.go:89] "kube-apiserver-no-preload-819398" [f3618893-6f46-4a0e-b603-8fc1062350b8] Running
	I0816 00:39:50.973465   78489 system_pods.go:89] "kube-controller-manager-no-preload-819398" [c5e1d73f-c3b0-44a6-a45a-d11c191e4a26] Running
	I0816 00:39:50.973468   78489 system_pods.go:89] "kube-proxy-nl7g6" [4697f7b9-3f79-451d-927e-15eb68e88eb6] Running
	I0816 00:39:50.973471   78489 system_pods.go:89] "kube-scheduler-no-preload-819398" [1243de64-d006-40a7-bd43-b0265dbef27d] Running
	I0816 00:39:50.973477   78489 system_pods.go:89] "metrics-server-6867b74b74-dz5h4" [02a73f5f-79ef-4563-81e1-afb5ad8e2e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 00:39:50.973482   78489 system_pods.go:89] "storage-provisioner" [1b813a00-5eeb-468e-8591-e3d83ddb1556] Running
	I0816 00:39:50.973491   78489 system_pods.go:126] duration metric: took 202.288008ms to wait for k8s-apps to be running ...
	I0816 00:39:50.973498   78489 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 00:39:50.973539   78489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:39:50.989562   78489 system_svc.go:56] duration metric: took 16.053781ms WaitForService to wait for kubelet
	I0816 00:39:50.989595   78489 kubeadm.go:582] duration metric: took 8.384495377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 00:39:50.989618   78489 node_conditions.go:102] verifying NodePressure condition ...
	I0816 00:39:51.171076   78489 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 00:39:51.171109   78489 node_conditions.go:123] node cpu capacity is 2
	I0816 00:39:51.171120   78489 node_conditions.go:105] duration metric: took 181.496732ms to run NodePressure ...
	I0816 00:39:51.171134   78489 start.go:241] waiting for startup goroutines ...
	I0816 00:39:51.171144   78489 start.go:246] waiting for cluster config update ...
	I0816 00:39:51.171157   78489 start.go:255] writing updated cluster config ...
	I0816 00:39:51.171465   78489 ssh_runner.go:195] Run: rm -f paused
	I0816 00:39:51.220535   78489 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 00:39:51.223233   78489 out.go:177] * Done! kubectl is now configured to use "no-preload-819398" cluster and "default" namespace by default
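Once minikube reports "Done!", the kubeconfig context is already selected; switching between the profiles started in this run is a one-liner (context names taken from the logs above, shown only as an example):

    # show the active context and switch to another profile if needed
    kubectl config current-context
    kubectl config use-context no-preload-819398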
	I0816 00:40:18.143220   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:40:18.143333   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:40:18.144757   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.144804   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.144888   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.145018   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.145134   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:18.145210   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:18.146791   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:18.146879   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:18.146965   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:18.147072   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:18.147164   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:18.147258   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:18.147340   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:18.147434   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:18.147525   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:18.147613   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:18.147708   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:18.147744   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:18.147791   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:18.147839   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:18.147916   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:18.147989   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:18.148045   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:18.148194   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:18.148318   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:18.148365   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:18.148458   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:18.149817   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:18.149941   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:18.150044   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:18.150107   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:18.150187   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:18.150323   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:40:18.150380   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:40:18.150460   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150671   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.150766   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.150953   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151033   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151232   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151305   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151520   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151614   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:40:18.151840   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:40:18.151856   79191 kubeadm.go:310] 
	I0816 00:40:18.151917   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:40:18.151978   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:40:18.151992   79191 kubeadm.go:310] 
	I0816 00:40:18.152046   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:40:18.152097   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:40:18.152204   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:40:18.152218   79191 kubeadm.go:310] 
	I0816 00:40:18.152314   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:40:18.152349   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:40:18.152377   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:40:18.152384   79191 kubeadm.go:310] 
	I0816 00:40:18.152466   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:40:18.152537   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:40:18.152543   79191 kubeadm.go:310] 
	I0816 00:40:18.152674   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:40:18.152769   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:40:18.152853   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:40:18.152914   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:40:18.152978   79191 kubeadm.go:310] 
	W0816 00:40:18.153019   79191 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0816 00:40:18.153055   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0816 00:40:18.634058   79191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 00:40:18.648776   79191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 00:40:18.659504   79191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 00:40:18.659529   79191 kubeadm.go:157] found existing configuration files:
	
	I0816 00:40:18.659584   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 00:40:18.670234   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 00:40:18.670285   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 00:40:18.680370   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 00:40:18.689496   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 00:40:18.689557   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 00:40:18.698949   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.708056   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 00:40:18.708118   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 00:40:18.718261   79191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 00:40:18.728708   79191 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 00:40:18.728777   79191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 00:40:18.739253   79191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 00:40:18.819666   79191 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0816 00:40:18.819746   79191 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 00:40:18.966568   79191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 00:40:18.966704   79191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 00:40:18.966868   79191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 00:40:19.168323   79191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 00:40:19.170213   79191 out.go:235]   - Generating certificates and keys ...
	I0816 00:40:19.170335   79191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 00:40:19.170464   79191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 00:40:19.170546   79191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 00:40:19.170598   79191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 00:40:19.170670   79191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 00:40:19.170740   79191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 00:40:19.170828   79191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 00:40:19.170924   79191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 00:40:19.171031   79191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 00:40:19.171129   79191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 00:40:19.171179   79191 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 00:40:19.171261   79191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 00:40:19.421256   79191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 00:40:19.585260   79191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 00:40:19.672935   79191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 00:40:19.928620   79191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 00:40:19.952420   79191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 00:40:19.953527   79191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 00:40:19.953578   79191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 00:40:20.090384   79191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 00:40:20.092904   79191 out.go:235]   - Booting up control plane ...
	I0816 00:40:20.093037   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 00:40:20.105743   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 00:40:20.106980   79191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 00:40:20.108199   79191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 00:40:20.111014   79191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 00:41:00.113053   79191 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0816 00:41:00.113479   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:00.113752   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:05.113795   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:05.114091   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:15.114695   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:15.114932   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:41:35.116019   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:41:35.116207   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.116728   79191 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0816 00:42:15.116994   79191 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0816 00:42:15.117018   79191 kubeadm.go:310] 
	I0816 00:42:15.117071   79191 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0816 00:42:15.117136   79191 kubeadm.go:310] 		timed out waiting for the condition
	I0816 00:42:15.117147   79191 kubeadm.go:310] 
	I0816 00:42:15.117198   79191 kubeadm.go:310] 	This error is likely caused by:
	I0816 00:42:15.117248   79191 kubeadm.go:310] 		- The kubelet is not running
	I0816 00:42:15.117402   79191 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0816 00:42:15.117412   79191 kubeadm.go:310] 
	I0816 00:42:15.117543   79191 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0816 00:42:15.117601   79191 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0816 00:42:15.117636   79191 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0816 00:42:15.117644   79191 kubeadm.go:310] 
	I0816 00:42:15.117778   79191 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0816 00:42:15.117918   79191 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0816 00:42:15.117929   79191 kubeadm.go:310] 
	I0816 00:42:15.118083   79191 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0816 00:42:15.118215   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0816 00:42:15.118313   79191 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0816 00:42:15.118412   79191 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0816 00:42:15.118433   79191 kubeadm.go:310] 
	I0816 00:42:15.118582   79191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 00:42:15.118698   79191 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0816 00:42:15.118843   79191 kubeadm.go:394] duration metric: took 8m2.460648867s to StartCluster
	I0816 00:42:15.118855   79191 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0816 00:42:15.118891   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0816 00:42:15.118957   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 00:42:15.162809   79191 cri.go:89] found id: ""
	I0816 00:42:15.162837   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.162848   79191 logs.go:278] No container was found matching "kube-apiserver"
	I0816 00:42:15.162855   79191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0816 00:42:15.162925   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 00:42:15.198020   79191 cri.go:89] found id: ""
	I0816 00:42:15.198042   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.198053   79191 logs.go:278] No container was found matching "etcd"
	I0816 00:42:15.198063   79191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0816 00:42:15.198132   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 00:42:15.238168   79191 cri.go:89] found id: ""
	I0816 00:42:15.238197   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.238206   79191 logs.go:278] No container was found matching "coredns"
	I0816 00:42:15.238213   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0816 00:42:15.238273   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 00:42:15.278364   79191 cri.go:89] found id: ""
	I0816 00:42:15.278391   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.278401   79191 logs.go:278] No container was found matching "kube-scheduler"
	I0816 00:42:15.278407   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0816 00:42:15.278465   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 00:42:15.316182   79191 cri.go:89] found id: ""
	I0816 00:42:15.316209   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.316216   79191 logs.go:278] No container was found matching "kube-proxy"
	I0816 00:42:15.316222   79191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 00:42:15.316278   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 00:42:15.352934   79191 cri.go:89] found id: ""
	I0816 00:42:15.352962   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.352970   79191 logs.go:278] No container was found matching "kube-controller-manager"
	I0816 00:42:15.352976   79191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0816 00:42:15.353031   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 00:42:15.388940   79191 cri.go:89] found id: ""
	I0816 00:42:15.388966   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.388973   79191 logs.go:278] No container was found matching "kindnet"
	I0816 00:42:15.388983   79191 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 00:42:15.389042   79191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 00:42:15.424006   79191 cri.go:89] found id: ""
	I0816 00:42:15.424035   79191 logs.go:276] 0 containers: []
	W0816 00:42:15.424043   79191 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0816 00:42:15.424054   79191 logs.go:123] Gathering logs for describe nodes ...
	I0816 00:42:15.424073   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0816 00:42:15.504823   79191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0816 00:42:15.504846   79191 logs.go:123] Gathering logs for CRI-O ...
	I0816 00:42:15.504858   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0816 00:42:15.608927   79191 logs.go:123] Gathering logs for container status ...
	I0816 00:42:15.608959   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 00:42:15.676785   79191 logs.go:123] Gathering logs for kubelet ...
	I0816 00:42:15.676810   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 00:42:15.744763   79191 logs.go:123] Gathering logs for dmesg ...
	I0816 00:42:15.744805   79191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0816 00:42:15.760944   79191 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0816 00:42:15.761012   79191 out.go:270] * 
	W0816 00:42:15.761078   79191 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.761098   79191 out.go:270] * 
	W0816 00:42:15.762220   79191 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 00:42:15.765697   79191 out.go:201] 
	W0816 00:42:15.766942   79191 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0816 00:42:15.767018   79191 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0816 00:42:15.767040   79191 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0816 00:42:15.768526   79191 out.go:201] 
	
	
	==> CRI-O <==
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.910111642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769577910088246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6338eb2-5e89-4035-aca0-af849b2c49af name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.910649708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61902bc5-96e5-48d3-8d7e-a7ca17b03bc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.910715282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61902bc5-96e5-48d3-8d7e-a7ca17b03bc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.910752719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=61902bc5-96e5-48d3-8d7e-a7ca17b03bc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.944476867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84722b27-d8b8-46cc-90b6-125f9edb32bb name=/runtime.v1.RuntimeService/Version
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.944571204Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84722b27-d8b8-46cc-90b6-125f9edb32bb name=/runtime.v1.RuntimeService/Version
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.945834250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08097ef5-3991-48a9-b127-e9e419b6c3c1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.946206691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769577946187006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08097ef5-3991-48a9-b127-e9e419b6c3c1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.946957677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06ae93f1-0bd0-4f0b-a2ff-aa229e1a5d63 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.947023609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06ae93f1-0bd0-4f0b-a2ff-aa229e1a5d63 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.947061000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=06ae93f1-0bd0-4f0b-a2ff-aa229e1a5d63 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.979799827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59e9d8c2-b42d-432d-b004-11984d023067 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.979869595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59e9d8c2-b42d-432d-b004-11984d023067 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.981505220Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9472837-332b-455b-a44c-53ce04c0b3a9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.981894445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769577981868333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9472837-332b-455b-a44c-53ce04c0b3a9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.982470687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99f6bc88-f5ab-4b42-814e-684c8548f230 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.982536701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99f6bc88-f5ab-4b42-814e-684c8548f230 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:57 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:57.982604624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=99f6bc88-f5ab-4b42-814e-684c8548f230 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:58 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:58.014938175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7a10edb-588d-4eb7-8254-e2722e2278e3 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:52:58 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:58.015052590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7a10edb-588d-4eb7-8254-e2722e2278e3 name=/runtime.v1.RuntimeService/Version
	Aug 16 00:52:58 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:58.016046819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66938159-2f86-494e-8544-348d902ad969 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:52:58 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:58.016497863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723769578016407903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66938159-2f86-494e-8544-348d902ad969 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 16 00:52:58 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:58.017013370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0c5a559-0c9e-4b31-8e80-5b9bf13ce735 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:58 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:58.017085216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0c5a559-0c9e-4b31-8e80-5b9bf13ce735 name=/runtime.v1.RuntimeService/ListContainers
	Aug 16 00:52:58 old-k8s-version-098619 crio[650]: time="2024-08-16 00:52:58.017119822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b0c5a559-0c9e-4b31-8e80-5b9bf13ce735 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug16 00:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055820] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042316] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.997792] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.610931] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.386268] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug16 00:34] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.149906] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.218773] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.113453] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.292715] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.582198] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.063869] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.975940] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	[ +13.278959] kauditd_printk_skb: 46 callbacks suppressed
	[Aug16 00:38] systemd-fstab-generator[5083]: Ignoring "noauto" option for root device
	[Aug16 00:40] systemd-fstab-generator[5363]: Ignoring "noauto" option for root device
	[  +0.062259] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:52:58 up 19 min,  0 users,  load average: 0.21, 0.10, 0.06
	Linux old-k8s-version-098619 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]: net.(*sysDialer).dialSerial(0xc000c61780, 0x4f7fe40, 0xc000a22780, 0xc000893890, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]:         /usr/local/go/src/net/dial.go:548 +0x152
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]: net.(*Dialer).DialContext(0xc0003b7260, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009376b0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc00060e2e0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009376b0, 0x24, 0x60, 0x7f48ec409ff8, 0x118, ...)
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]: net/http.(*Transport).dial(0xc000a6c000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009376b0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]: net/http.(*Transport).dialConn(0xc000a6c000, 0x4f7fe00, 0xc000052030, 0x0, 0xc00037c540, 0x5, 0xc0009376b0, 0x24, 0x0, 0xc000a02360, ...)
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]: net/http.(*Transport).dialConnFor(0xc000a6c000, 0xc0008a94a0)
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]: created by net/http.(*Transport).queueForDial
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6767]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 16 00:52:54 old-k8s-version-098619 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 16 00:52:54 old-k8s-version-098619 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 16 00:52:54 old-k8s-version-098619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 131.
	Aug 16 00:52:54 old-k8s-version-098619 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 16 00:52:54 old-k8s-version-098619 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6776]: I0816 00:52:54.937337    6776 server.go:416] Version: v1.20.0
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6776]: I0816 00:52:54.937751    6776 server.go:837] Client rotation is on, will bootstrap in background
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6776]: I0816 00:52:54.941270    6776 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6776]: W0816 00:52:54.943028    6776 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 16 00:52:54 old-k8s-version-098619 kubelet[6776]: I0816 00:52:54.944193    6776 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 2 (244.021636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-098619" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (96.78s)
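
The kubelet trace above ends in a crash loop (systemd restart counter at 131) while the profile's apiserver status stays "Stopped". A minimal sketch of pulling the same diagnostics from the node by hand, assuming the old-k8s-version-098619 profile still exists and is reachable over SSH:

  # open a shell on the minikube VM for this profile
  out/minikube-linux-amd64 ssh -p old-k8s-version-098619
  # inside the VM: check the kubelet unit and its most recent log lines
  sudo systemctl status kubelet --no-pager
  sudo journalctl -u kubelet --no-pager -n 100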

                                                
                                    

Test pass (251/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.63
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 5.11
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 114.13
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 128.46
31 TestAddons/serial/GCPAuth/Namespaces 1.81
33 TestAddons/parallel/Registry 17.08
35 TestAddons/parallel/InspektorGadget 12.01
37 TestAddons/parallel/HelmTiller 10.12
39 TestAddons/parallel/CSI 58.88
40 TestAddons/parallel/Headlamp 12.19
41 TestAddons/parallel/CloudSpanner 6.52
42 TestAddons/parallel/LocalPath 54.15
43 TestAddons/parallel/NvidiaDevicePlugin 6.57
44 TestAddons/parallel/Yakd 10.88
46 TestCertOptions 90.06
47 TestCertExpiration 325.42
49 TestForceSystemdFlag 74.49
50 TestForceSystemdEnv 47.37
52 TestKVMDriverInstallOrUpdate 1.36
56 TestErrorSpam/setup 43.93
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.73
59 TestErrorSpam/pause 1.61
60 TestErrorSpam/unpause 1.73
61 TestErrorSpam/stop 6.19
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 89.56
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.24
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.35
73 TestFunctional/serial/CacheCmd/cache/add_local 1.11
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 35.17
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.47
84 TestFunctional/serial/LogsFileCmd 1.42
85 TestFunctional/serial/InvalidService 4.27
87 TestFunctional/parallel/ConfigCmd 0.3
88 TestFunctional/parallel/DashboardCmd 15.09
89 TestFunctional/parallel/DryRun 0.25
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.79
95 TestFunctional/parallel/ServiceCmdConnect 25.54
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 44.08
99 TestFunctional/parallel/SSHCmd 0.36
100 TestFunctional/parallel/CpCmd 1.56
101 TestFunctional/parallel/MySQL 23.13
102 TestFunctional/parallel/FileSync 0.22
103 TestFunctional/parallel/CertSync 1.33
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
111 TestFunctional/parallel/License 0.17
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.64
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.44
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.42
119 TestFunctional/parallel/ImageCommands/Setup 0.46
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.58
134 TestFunctional/parallel/ProfileCmd/profile_list 0.28
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.04
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.36
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.85
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 6.08
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.94
142 TestFunctional/parallel/ServiceCmd/DeployApp 9.28
143 TestFunctional/parallel/MountCmd/any-port 7.67
144 TestFunctional/parallel/ServiceCmd/List 0.43
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
147 TestFunctional/parallel/ServiceCmd/Format 0.53
148 TestFunctional/parallel/ServiceCmd/URL 0.29
149 TestFunctional/parallel/MountCmd/specific-port 2.05
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 192.45
158 TestMultiControlPlane/serial/DeployApp 5.98
159 TestMultiControlPlane/serial/PingHostFromPods 1.24
160 TestMultiControlPlane/serial/AddWorkerNode 56.11
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
163 TestMultiControlPlane/serial/CopyFile 12.62
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.46
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.37
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 487.95
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
174 TestMultiControlPlane/serial/AddSecondaryNode 72.79
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 84.71
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.72
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.62
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.33
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 88.98
211 TestMountStart/serial/StartWithMountFirst 27.16
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 31.9
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 1.04
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 20.39
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 111.9
223 TestMultiNode/serial/DeployApp2Nodes 4.86
224 TestMultiNode/serial/PingHostFrom2Pods 0.78
225 TestMultiNode/serial/AddNode 50.63
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.98
229 TestMultiNode/serial/StopNode 2.31
230 TestMultiNode/serial/StartAfterStop 37.7
232 TestMultiNode/serial/DeleteNode 2.36
234 TestMultiNode/serial/RestartMultiNode 195.26
235 TestMultiNode/serial/ValidateNameConflict 43.42
242 TestScheduledStopUnix 118.08
246 TestRunningBinaryUpgrade 201.49
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 92.87
260 TestNetworkPlugins/group/false 2.79
264 TestNoKubernetes/serial/StartWithStopK8s 38.66
265 TestStoppedBinaryUpgrade/Setup 0.49
266 TestStoppedBinaryUpgrade/Upgrade 108.28
267 TestNoKubernetes/serial/Start 45.02
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
269 TestNoKubernetes/serial/ProfileList 28.99
270 TestNoKubernetes/serial/Stop 2.38
271 TestNoKubernetes/serial/StartNoArgs 23.16
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
282 TestPause/serial/Start 91.45
283 TestNetworkPlugins/group/auto/Start 96.55
284 TestPause/serial/SecondStartNoReconfiguration 40.78
285 TestNetworkPlugins/group/auto/KubeletFlags 0.21
286 TestNetworkPlugins/group/auto/NetCatPod 11.24
287 TestNetworkPlugins/group/kindnet/Start 63.38
288 TestPause/serial/Pause 0.74
289 TestNetworkPlugins/group/auto/DNS 0.19
290 TestNetworkPlugins/group/auto/Localhost 0.15
291 TestPause/serial/VerifyStatus 0.27
292 TestNetworkPlugins/group/auto/HairPin 0.16
293 TestPause/serial/Unpause 0.71
294 TestPause/serial/PauseAgain 0.85
295 TestPause/serial/DeletePaused 1.03
296 TestPause/serial/VerifyDeletedResources 0.43
297 TestNetworkPlugins/group/calico/Start 89.85
298 TestNetworkPlugins/group/custom-flannel/Start 96.53
299 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
300 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
301 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
302 TestNetworkPlugins/group/kindnet/DNS 0.21
303 TestNetworkPlugins/group/kindnet/Localhost 0.16
304 TestNetworkPlugins/group/kindnet/HairPin 0.16
305 TestNetworkPlugins/group/enable-default-cni/Start 90.65
306 TestNetworkPlugins/group/calico/ControllerPod 6.01
307 TestNetworkPlugins/group/calico/KubeletFlags 0.22
308 TestNetworkPlugins/group/calico/NetCatPod 11.29
309 TestNetworkPlugins/group/flannel/Start 82.83
310 TestNetworkPlugins/group/calico/DNS 0.21
311 TestNetworkPlugins/group/calico/Localhost 0.13
312 TestNetworkPlugins/group/calico/HairPin 0.13
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
315 TestNetworkPlugins/group/custom-flannel/DNS 0.27
316 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
317 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
318 TestNetworkPlugins/group/bridge/Start 95.65
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
323 TestNetworkPlugins/group/flannel/ControllerPod 6.01
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
328 TestNetworkPlugins/group/flannel/NetCatPod 11.23
329 TestNetworkPlugins/group/flannel/DNS 0.2
330 TestNetworkPlugins/group/flannel/Localhost 0.21
331 TestNetworkPlugins/group/flannel/HairPin 0.15
333 TestStartStop/group/no-preload/serial/FirstStart 102.64
335 TestStartStop/group/embed-certs/serial/FirstStart 101.32
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
337 TestNetworkPlugins/group/bridge/NetCatPod 10.23
338 TestNetworkPlugins/group/bridge/DNS 0.18
339 TestNetworkPlugins/group/bridge/Localhost 0.15
340 TestNetworkPlugins/group/bridge/HairPin 0.18
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.54
343 TestStartStop/group/no-preload/serial/DeployApp 7.31
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
346 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
347 TestStartStop/group/embed-certs/serial/DeployApp 8.3
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
356 TestStartStop/group/no-preload/serial/SecondStart 684.65
358 TestStartStop/group/embed-certs/serial/SecondStart 567.91
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 583.77
360 TestStartStop/group/old-k8s-version/serial/Stop 4.28
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/newest-cni/serial/FirstStart 50.64
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
375 TestStartStop/group/newest-cni/serial/Stop 7.33
376 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
377 TestStartStop/group/newest-cni/serial/SecondStart 37.28
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
381 TestStartStop/group/newest-cni/serial/Pause 2.3
x
+
TestDownloadOnly/v1.20.0/json-events (8.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-218888 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-218888 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.625305331s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.63s)
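
Because the start command is run with -o=json, every progress event is emitted as one JSON object per line on stdout (logs go to stderr via --alsologtostderr). A minimal sketch of capturing and filtering those events outside the test harness, assuming jq is installed; the profile name download-only-example is illustrative:

  out/minikube-linux-amd64 start -o=json --download-only -p download-only-example \
    --force --alsologtostderr --kubernetes-version=v1.20.0 \
    --container-runtime=crio --driver=kvm2 > events.json
  # count the emitted event types (step and download-progress events)
  jq -r '.type' events.json | sort | uniq -c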

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-218888
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-218888: exit status 85 (55.609084ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-218888 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC |          |
	|         | -p download-only-218888        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:05:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:05:25.489452   20090 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:05:25.489725   20090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:05:25.489735   20090 out.go:358] Setting ErrFile to fd 2...
	I0815 23:05:25.489740   20090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:05:25.489973   20090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	W0815 23:05:25.490135   20090 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19452-12919/.minikube/config/config.json: open /home/jenkins/minikube-integration/19452-12919/.minikube/config/config.json: no such file or directory
	I0815 23:05:25.490751   20090 out.go:352] Setting JSON to true
	I0815 23:05:25.491625   20090 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2825,"bootTime":1723760300,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:05:25.491687   20090 start.go:139] virtualization: kvm guest
	I0815 23:05:25.494088   20090 out.go:97] [download-only-218888] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0815 23:05:25.494190   20090 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 23:05:25.494244   20090 notify.go:220] Checking for updates...
	I0815 23:05:25.495637   20090 out.go:169] MINIKUBE_LOCATION=19452
	I0815 23:05:25.496944   20090 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:05:25.498297   20090 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:05:25.499505   20090 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:05:25.500755   20090 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 23:05:25.502968   20090 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 23:05:25.503198   20090 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:05:25.599122   20090 out.go:97] Using the kvm2 driver based on user configuration
	I0815 23:05:25.599147   20090 start.go:297] selected driver: kvm2
	I0815 23:05:25.599162   20090 start.go:901] validating driver "kvm2" against <nil>
	I0815 23:05:25.599495   20090 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:05:25.599626   20090 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:05:25.614505   20090 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:05:25.614568   20090 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 23:05:25.615079   20090 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0815 23:05:25.615245   20090 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 23:05:25.615278   20090 cni.go:84] Creating CNI manager for ""
	I0815 23:05:25.615292   20090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 23:05:25.615305   20090 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 23:05:25.615370   20090 start.go:340] cluster config:
	{Name:download-only-218888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-218888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:05:25.615600   20090 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:05:25.617622   20090 out.go:97] Downloading VM boot image ...
	I0815 23:05:25.617660   20090 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0815 23:05:28.607729   20090 out.go:97] Starting "download-only-218888" primary control-plane node in "download-only-218888" cluster
	I0815 23:05:28.607756   20090 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 23:05:28.639619   20090 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:05:28.639651   20090 cache.go:56] Caching tarball of preloaded images
	I0815 23:05:28.639830   20090 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 23:05:28.641454   20090 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 23:05:28.641469   20090 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 23:05:28.671672   20090 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-218888 host does not exist
	  To start a cluster, run: "minikube start -p download-only-218888"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
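
The Last Start log above fetches each artifact with a checksum query parameter: the boot ISO against a published .sha256 file and the preload tarball against an md5 digest. A minimal sketch of re-checking the cached ISO by hand, using the URLs and cache path shown in the log for this agent:

  ISO=/home/jenkins/minikube-integration/19452-12919/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
  # digest published next to the image
  curl -fsSL https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256
  # digest of the cached copy; the two values should match
  sha256sum "$ISO"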

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-218888
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (5.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-195850 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-195850 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.11191633s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-195850
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-195850: exit status 85 (56.850625ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-218888 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC |                     |
	|         | -p download-only-218888        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
	| delete  | -p download-only-218888        | download-only-218888 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC | 15 Aug 24 23:05 UTC |
	| start   | -o=json --download-only        | download-only-195850 | jenkins | v1.33.1 | 15 Aug 24 23:05 UTC |                     |
	|         | -p download-only-195850        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 23:05:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 23:05:34.428506   20300 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:05:34.428629   20300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:05:34.428638   20300 out.go:358] Setting ErrFile to fd 2...
	I0815 23:05:34.428645   20300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:05:34.428834   20300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:05:34.429389   20300 out.go:352] Setting JSON to true
	I0815 23:05:34.430260   20300 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2834,"bootTime":1723760300,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:05:34.430318   20300 start.go:139] virtualization: kvm guest
	I0815 23:05:34.432159   20300 out.go:97] [download-only-195850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:05:34.432350   20300 notify.go:220] Checking for updates...
	I0815 23:05:34.433822   20300 out.go:169] MINIKUBE_LOCATION=19452
	I0815 23:05:34.435347   20300 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:05:34.436737   20300 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:05:34.438107   20300 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:05:34.439349   20300 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 23:05:34.441808   20300 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 23:05:34.442054   20300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:05:34.475365   20300 out.go:97] Using the kvm2 driver based on user configuration
	I0815 23:05:34.475406   20300 start.go:297] selected driver: kvm2
	I0815 23:05:34.475414   20300 start.go:901] validating driver "kvm2" against <nil>
	I0815 23:05:34.475791   20300 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:05:34.475880   20300 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19452-12919/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 23:05:34.491386   20300 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 23:05:34.491438   20300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 23:05:34.491915   20300 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0815 23:05:34.492044   20300 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 23:05:34.492101   20300 cni.go:84] Creating CNI manager for ""
	I0815 23:05:34.492113   20300 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 23:05:34.492121   20300 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 23:05:34.492165   20300 start.go:340] cluster config:
	{Name:download-only-195850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-195850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:05:34.492250   20300 iso.go:125] acquiring lock: {Name:mk18de6493e4b29cb1a03fa462b2de44693c337e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 23:05:34.494015   20300 out.go:97] Starting "download-only-195850" primary control-plane node in "download-only-195850" cluster
	I0815 23:05:34.494047   20300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:05:34.516368   20300 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:05:34.516398   20300 cache.go:56] Caching tarball of preloaded images
	I0815 23:05:34.516537   20300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:05:34.518241   20300 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 23:05:34.518257   20300 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 23:05:34.549836   20300 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 23:05:38.299615   20300 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 23:05:38.299728   20300 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19452-12919/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 23:05:39.038133   20300 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 23:05:39.038456   20300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/download-only-195850/config.json ...
	I0815 23:05:39.038485   20300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/download-only-195850/config.json: {Name:mk4cc2e1a408c1c0973385212a2c709f9d33e4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 23:05:39.038633   20300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 23:05:39.038775   20300 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19452-12919/.minikube/cache/linux/amd64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-195850 host does not exist
	  To start a cluster, run: "minikube start -p download-only-195850"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-195850
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-071536 --alsologtostderr --binary-mirror http://127.0.0.1:39393 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-071536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-071536
--- PASS: TestBinaryMirror (0.58s)
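
TestBinaryMirror points --binary-mirror at a local HTTP endpoint instead of the default download host. A minimal sketch of standing up such a mirror by hand, assuming the served directory mimics the dl.k8s.io release layout; the mirror/ directory and the profile name binary-mirror-example are illustrative:

  # serve a directory laid out like https://dl.k8s.io/release on the port the test uses
  (cd mirror && python3 -m http.server 39393) &
  out/minikube-linux-amd64 start --download-only -p binary-mirror-example \
    --alsologtostderr --binary-mirror http://127.0.0.1:39393 \
    --driver=kvm2 --container-runtime=crio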

                                                
                                    
x
+
TestOffline (114.13s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-116258 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-116258 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m53.343369822s)
helpers_test.go:175: Cleaning up "offline-crio-116258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-116258
--- PASS: TestOffline (114.13s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-517040
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-517040: exit status 85 (50.279223ms)

                                                
                                                
-- stdout --
	* Profile "addons-517040" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-517040"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-517040
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-517040: exit status 85 (48.967144ms)

                                                
                                                
-- stdout --
	* Profile "addons-517040" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-517040"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (128.46s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-517040 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-517040 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m8.459499183s)
--- PASS: TestAddons/Setup (128.46s)
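
The setup run enables every addon up front through repeated --addons flags on minikube start. On an already-running profile the same addons can be inspected and toggled one at a time; a minimal sketch, assuming the addons-517040 profile from this run:

  # show addon status for the profile
  out/minikube-linux-amd64 addons list -p addons-517040
  # enable or disable a single addon after the cluster is up
  out/minikube-linux-amd64 addons enable metrics-server -p addons-517040
  out/minikube-linux-amd64 addons disable helm-tiller -p addons-517040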

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (1.81s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-517040 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-517040 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-517040 get secret gcp-auth -n new-namespace: exit status 1 (83.841928ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-517040 logs -l app=gcp-auth -n gcp-auth
addons_test.go:670: (dbg) Run:  kubectl --context addons-517040 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.81s)
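
The namespaces check creates a fresh namespace and re-queries it until the gcp-auth secret shows up there; the first kubectl get fails with NotFound, the retry succeeds. A minimal sketch of the same check, assuming a cluster with the gcp-auth addon enabled (new-namespace is the name the test uses):

  kubectl --context addons-517040 create ns new-namespace
  # immediately after creation the secret may not exist yet, as in the first attempt above
  kubectl --context addons-517040 get secret gcp-auth -n new-namespace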

                                                
                                    
x
+
TestAddons/parallel/Registry (17.08s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.538412ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-g5m9x" [3fa1cd07-9f55-41bb-85a9-a958de7f5cbf] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003000305s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-h2mkz" [22fe5d24-ea50-43c5-a4bf-ee443e253852] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004104491s
addons_test.go:342: (dbg) Run:  kubectl --context addons-517040 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-517040 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-517040 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.986247445s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 ip
2024/08/15 23:08:35 [DEBUG] GET http://192.168.39.72:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.08s)
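
The registry check resolves the addon's in-cluster service from inside the cluster using a throwaway busybox pod. The same probe can be run by hand; a minimal sketch, copied from the command the test issues:

  kubectl --context addons-517040 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"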

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.01s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ksrcc" [07303972-feaf-41ec-bea0-356c25fe0995] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004717364s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-517040
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-517040: (6.003627472s)
--- PASS: TestAddons/parallel/InspektorGadget (12.01s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.12s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.021455ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-frmxp" [662d1936-5dbb-49d3-a200-0d9f9d807bfe] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004739367s
addons_test.go:475: (dbg) Run:  kubectl --context addons-517040 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-517040 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.527844443s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.12s)

                                                
                                    
x
+
TestAddons/parallel/CSI (58.88s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.481059ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-517040 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-517040 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e4829aea-c5b6-4c67-81aa-1448d60e7ce8] Pending
helpers_test.go:344: "task-pv-pod" [e4829aea-c5b6-4c67-81aa-1448d60e7ce8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e4829aea-c5b6-4c67-81aa-1448d60e7ce8] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004138426s
addons_test.go:590: (dbg) Run:  kubectl --context addons-517040 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-517040 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-517040 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-517040 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-517040 delete pod task-pv-pod: (1.099563253s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-517040 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-517040 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-517040 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bbd45653-abbf-43cf-b025-59efdca5a8e1] Pending
helpers_test.go:344: "task-pv-pod-restore" [bbd45653-abbf-43cf-b025-59efdca5a8e1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bbd45653-abbf-43cf-b025-59efdca5a8e1] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003963862s
addons_test.go:632: (dbg) Run:  kubectl --context addons-517040 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-517040 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-517040 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-517040 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.795126232s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-517040 addons disable volumesnapshots --alsologtostderr -v=1: (1.019667357s)
--- PASS: TestAddons/parallel/CSI (58.88s)
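The repeated helpers_test.go:394 entries in this test are the harness polling kubectl until the PVC phase reports Bound. A minimal Go sketch of that polling pattern, as an illustration only: the context, PVC name and 6m0s timeout come from the log above, while the 2-second poll interval is an assumed value, not the harness's.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls "kubectl get pvc" until the claim reports phase Bound
// or the deadline expires, mirroring the repeated helper invocations above.
func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-517040", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}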

                                                
                                    
TestAddons/parallel/Headlamp (12.19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-517040 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-lw8lr" [81da26ef-ec50-4d25-9e68-5daf93bbc089] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-lw8lr" [81da26ef-ec50-4d25-9e68-5daf93bbc089] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-lw8lr" [81da26ef-ec50-4d25-9e68-5daf93bbc089] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004460287s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.19s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-2r7qf" [268d9ac7-5581-4961-b57b-1e629be10b0f] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003836201s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-517040
--- PASS: TestAddons/parallel/CloudSpanner (6.52s)

                                                
                                    
TestAddons/parallel/LocalPath (54.15s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-517040 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-517040 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-517040 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d55ca295-9395-4869-8121-8b25fbf297b8] Pending
helpers_test.go:344: "test-local-path" [d55ca295-9395-4869-8121-8b25fbf297b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d55ca295-9395-4869-8121-8b25fbf297b8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d55ca295-9395-4869-8121-8b25fbf297b8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004082143s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-517040 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 ssh "cat /opt/local-path-provisioner/pvc-e577ed7e-383c-4543-b504-630414b64b8d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-517040 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-517040 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-517040 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.269668785s)
--- PASS: TestAddons/parallel/LocalPath (54.15s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.57s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-62jx9" [e1e1e2d3-eb2b-497d-9a69-d33c5428ad96] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004270428s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-517040
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                    
TestAddons/parallel/Yakd (10.88s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-8zmzf" [baa04b19-4ce2-46f0-b43c-c63a77a13476] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004688327s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-517040 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-517040 addons disable yakd --alsologtostderr -v=1: (5.869741525s)
--- PASS: TestAddons/parallel/Yakd (10.88s)

                                                
                                    
TestCertOptions (90.06s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-798942 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0816 00:17:51.161080   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-798942 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m28.851100942s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-798942 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-798942 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-798942 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-798942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-798942
--- PASS: TestCertOptions (90.06s)
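What TestCertOptions asserts is that the extra --apiserver-ips/--apiserver-names values end up as SANs in the node's apiserver certificate. A minimal Go sketch of that check, using the binary path, profile name and SAN values from the start command above; the plain substring match is a simplification of the real assertion.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the apiserver certificate from inside the node, as the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-798942", "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println("ssh/openssl failed:", err)
		return
	}
	// The extra IP and name passed on the start command should appear as SANs.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing SAN in apiserver cert:", want)
		}
	}
}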

                                                
                                    
TestCertExpiration (325.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-057647 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-057647 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m2.704882713s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-057647 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-057647 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m21.793100391s)
helpers_test.go:175: Cleaning up "cert-expiration-057647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-057647
--- PASS: TestCertExpiration (325.42s)

                                                
                                    
TestForceSystemdFlag (74.49s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-771420 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-771420 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.312475082s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-771420 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-771420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-771420
--- PASS: TestForceSystemdFlag (74.49s)

                                                
                                    
TestForceSystemdEnv (47.37s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-222534 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-222534 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.360157999s)
helpers_test.go:175: Cleaning up "force-systemd-env-222534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-222534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-222534: (1.007485749s)
--- PASS: TestForceSystemdEnv (47.37s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.36s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.36s)

                                                
                                    
TestErrorSpam/setup (43.93s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-617832 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-617832 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-617832 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-617832 --driver=kvm2  --container-runtime=crio: (43.931845369s)
--- PASS: TestErrorSpam/setup (43.93s)

                                                
                                    
TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.73s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.61s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (6.19s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 stop: (2.296470489s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 stop: (1.877188953s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-617832 --log_dir /tmp/nospam-617832 stop: (2.016718057s)
--- PASS: TestErrorSpam/stop (6.19s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19452-12919/.minikube/files/etc/test/nested/copy/20078/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (89.56s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-629421 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0815 23:17:51.160225   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:51.167220   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:51.178557   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:51.199934   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:51.241356   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:51.322860   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:51.484305   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:51.805948   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:52.448008   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:53.729628   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:17:56.291826   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:18:01.413555   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:18:11.655607   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-629421 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m29.560341695s)
--- PASS: TestFunctional/serial/StartWithProxy (89.56s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.24s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-629421 --alsologtostderr -v=8
E0815 23:18:32.137821   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-629421 --alsologtostderr -v=8: (38.240632001s)
functional_test.go:663: soft start took 38.241367492s for "functional-629421" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.24s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-629421 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-629421 cache add registry.k8s.io/pause:3.1: (1.067701766s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-629421 cache add registry.k8s.io/pause:3.3: (1.161897962s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-629421 cache add registry.k8s.io/pause:latest: (1.116739099s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-629421 /tmp/TestFunctionalserialCacheCmdcacheadd_local2670060247/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 cache add minikube-local-cache-test:functional-629421
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 cache delete minikube-local-cache-test:functional-629421
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-629421
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.908891ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
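The cache_reload sequence above removes the pause image inside the node, confirms crictl inspecti then fails, runs cache reload, and confirms the image is back. A minimal Go sketch of that flow, with the binary path, profile and image taken from the log; error handling is reduced to printing.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	bin, profile, image := "out/minikube-linux-amd64", "functional-629421", "registry.k8s.io/pause:latest"

	// Remove the image from inside the node.
	_ = run(bin, "-p", profile, "ssh", "sudo", "crictl", "rmi", image)

	// inspecti is now expected to fail, as in the log above.
	if run(bin, "-p", profile, "ssh", "sudo", "crictl", "inspecti", image) == nil {
		fmt.Println("unexpected: image still present after rmi")
	}

	// cache reload pushes the cached images back into the node...
	_ = run(bin, "-p", profile, "cache", "reload")

	// ...after which inspecti should succeed again.
	if err := run(bin, "-p", profile, "ssh", "sudo", "crictl", "inspecti", image); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}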

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 kubectl -- --context functional-629421 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-629421 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-629421 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0815 23:19:13.100501   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-629421 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.166812334s)
functional_test.go:761: restart took 35.166930585s for "functional-629421" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.17s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-629421 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-629421 logs: (1.466617144s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 logs --file /tmp/TestFunctionalserialLogsFileCmd1431729419/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-629421 logs --file /tmp/TestFunctionalserialLogsFileCmd1431729419/001/logs.txt: (1.420444579s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
TestFunctional/serial/InvalidService (4.27s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-629421 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-629421
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-629421: exit status 115 (271.564072ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.103:30313 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-629421 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 config get cpus: exit status 14 (44.327764ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 config get cpus: exit status 14 (48.922719ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
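ConfigCmd checks that "config get cpus" fails with "specified key could not be found in config" (exit status 14 above) while the key is unset, and succeeds after "config set cpus 2". A minimal Go sketch of that round trip, with the binary path and profile name taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

// configCmd runs "minikube -p functional-629421 config <args...>" and returns its output and error.
func configCmd(args ...string) (string, error) {
	full := append([]string{"-p", "functional-629421", "config"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	return string(out), err
}

func main() {
	_, _ = configCmd("unset", "cpus")

	// With no value set, "config get cpus" fails (exit status 14 in the log).
	if out, err := configCmd("get", "cpus"); err != nil {
		fmt.Printf("get after unset failed as expected: %v\n%s", err, out)
	}

	_, _ = configCmd("set", "cpus", "2")

	// After "config set cpus 2" the same query succeeds.
	if out, err := configCmd("get", "cpus"); err == nil {
		fmt.Print("cpus is now: ", out)
	}
}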

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.09s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-629421 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-629421 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 29964: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.09s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-629421 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-629421 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (126.417076ms)

                                                
                                                
-- stdout --
	* [functional-629421] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:20:18.575199   29204 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:20:18.575443   29204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:20:18.575453   29204 out.go:358] Setting ErrFile to fd 2...
	I0815 23:20:18.575458   29204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:20:18.575629   29204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:20:18.576135   29204 out.go:352] Setting JSON to false
	I0815 23:20:18.576998   29204 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3719,"bootTime":1723760300,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:20:18.577060   29204 start.go:139] virtualization: kvm guest
	I0815 23:20:18.579272   29204 out.go:177] * [functional-629421] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 23:20:18.580724   29204 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:20:18.580781   29204 notify.go:220] Checking for updates...
	I0815 23:20:18.583475   29204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:20:18.584863   29204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:20:18.586300   29204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:20:18.587749   29204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:20:18.589143   29204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:20:18.591079   29204 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:20:18.591678   29204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:20:18.591775   29204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:20:18.607302   29204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42267
	I0815 23:20:18.607697   29204 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:20:18.608168   29204 main.go:141] libmachine: Using API Version  1
	I0815 23:20:18.608187   29204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:20:18.608498   29204 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:20:18.608641   29204 main.go:141] libmachine: (functional-629421) Calling .DriverName
	I0815 23:20:18.608858   29204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:20:18.609134   29204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:20:18.609164   29204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:20:18.623618   29204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33165
	I0815 23:20:18.624086   29204 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:20:18.624472   29204 main.go:141] libmachine: Using API Version  1
	I0815 23:20:18.624492   29204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:20:18.624805   29204 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:20:18.624956   29204 main.go:141] libmachine: (functional-629421) Calling .DriverName
	I0815 23:20:18.656744   29204 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 23:20:18.658002   29204 start.go:297] selected driver: kvm2
	I0815 23:20:18.658021   29204 start.go:901] validating driver "kvm2" against &{Name:functional-629421 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-629421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:20:18.658141   29204 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:20:18.660067   29204 out.go:201] 
	W0815 23:20:18.661239   29204 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 23:20:18.662378   29204 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-629421 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
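The dry run with --memory 250MB is rejected during validation with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 above) because the requested allocation is below the 1800MB minimum. A minimal Go sketch that reproduces just that check; the flags mirror the failing command in the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same flags as the rejected dry-run start recorded above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-629421",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The log shows exit status 23 for the undersized memory request.
		fmt.Println("dry run rejected, exit status:", exitErr.ExitCode())
	} else if err == nil {
		fmt.Println("unexpected: undersized memory request was accepted")
	} else {
		fmt.Println("could not run minikube:", err)
	}
}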

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-629421 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-629421 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (128.376534ms)

                                                
                                                
-- stdout --
	* [functional-629421] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:20:18.449238   29177 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:20:18.449490   29177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:20:18.449499   29177 out.go:358] Setting ErrFile to fd 2...
	I0815 23:20:18.449504   29177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:20:18.449769   29177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:20:18.450293   29177 out.go:352] Setting JSON to false
	I0815 23:20:18.451230   29177 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3718,"bootTime":1723760300,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 23:20:18.451287   29177 start.go:139] virtualization: kvm guest
	I0815 23:20:18.453498   29177 out.go:177] * [functional-629421] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0815 23:20:18.455168   29177 out.go:177]   - MINIKUBE_LOCATION=19452
	I0815 23:20:18.455223   29177 notify.go:220] Checking for updates...
	I0815 23:20:18.457549   29177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 23:20:18.458801   29177 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0815 23:20:18.460071   29177 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0815 23:20:18.461299   29177 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 23:20:18.462588   29177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 23:20:18.464529   29177 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:20:18.465122   29177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:20:18.465185   29177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:20:18.479771   29177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0815 23:20:18.480126   29177 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:20:18.480617   29177 main.go:141] libmachine: Using API Version  1
	I0815 23:20:18.480644   29177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:20:18.480970   29177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:20:18.481146   29177 main.go:141] libmachine: (functional-629421) Calling .DriverName
	I0815 23:20:18.481376   29177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 23:20:18.481675   29177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:20:18.481713   29177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:20:18.495918   29177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0815 23:20:18.496279   29177 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:20:18.496735   29177 main.go:141] libmachine: Using API Version  1
	I0815 23:20:18.496760   29177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:20:18.497019   29177 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:20:18.497192   29177 main.go:141] libmachine: (functional-629421) Calling .DriverName
	I0815 23:20:18.528722   29177 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0815 23:20:18.529999   29177 start.go:297] selected driver: kvm2
	I0815 23:20:18.530017   29177 start.go:901] validating driver "kvm2" against &{Name:functional-629421 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-629421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 23:20:18.530106   29177 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 23:20:18.532015   29177 out.go:201] 
	W0815 23:20:18.533329   29177 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 23:20:18.535155   29177 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
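The localized stderr above is the French rendering of the expected RSRC_INSUFFICIENT_REQ_MEMORY failure: the requested 250MiB allocation is below minikube's usable minimum of 1800MB, so the dry-run exits with status 23. A minimal Go sketch of that validation rule, with the 1800MB floor taken from the logged message (illustrative only, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
)

// validateMemoryMB mirrors the RSRC_INSUFFICIENT_REQ_MEMORY check asserted above:
// a requested allocation below the usable minimum is rejected.
func validateMemoryMB(requestedMB, minimumMB int) error {
	if requestedMB < minimumMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB", requestedMB, minimumMB)
	}
	return nil
}

func main() {
	if err := validateMemoryMB(250, 1800); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // exit status observed by the test above
	}
}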

                                                
                                    
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)
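The second status invocation above passes a Go text/template through the -f flag; the placeholders refer to minikube's reported Host, Kubelet, APIServer and Kubeconfig states (the "kublet" label is spelled that way in the test itself). A minimal sketch of how such a template expands, using an illustrative struct rather than minikube's internal status type:

package main

import (
	"os"
	"text/template"
)

// status carries just the fields referenced by the format string in the log above.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same template text as the logged -f argument.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	t := template.Must(template.New("status").Parse(format))
	_ = t.Execute(os.Stdout, status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"})
	// Output: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}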

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (25.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-629421 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-629421 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-r2l2t" [b929ccbf-9e15-4705-9a46-b404c4c63dae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-r2l2t" [b929ccbf-9e15-4705-9a46-b404c4c63dae] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 25.003680784s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.103:31420
functional_test.go:1675: http://192.168.39.103:31420: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-r2l2t

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.103:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.103:31420
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (25.54s)
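The connectivity check above amounts to an HTTP GET against the NodePort URL printed by "minikube service hello-node-connect --url"; the echoserver body then identifies the answering pod and echoes the request headers. A minimal Go sketch of the same probe, using the URL from this particular run (it differs on every cluster):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// URL as reported by "minikube service hello-node-connect --url" in this run.
	resp, err := http.Get("http://192.168.39.103:31420/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The body echoes the pod hostname, server values and request headers, as shown above.
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}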

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (44.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [525898dc-0e8a-48e1-8fb8-259f59049300] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004408513s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-629421 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-629421 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-629421 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-629421 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-629421 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b4c89b9b-9d35-44b1-a2c2-033603cade70] Pending
helpers_test.go:344: "sp-pod" [b4c89b9b-9d35-44b1-a2c2-033603cade70] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b4c89b9b-9d35-44b1-a2c2-033603cade70] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.005457449s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-629421 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-629421 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-629421 delete -f testdata/storage-provisioner/pod.yaml: (1.354019411s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-629421 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f88952cf-3ad9-4ffa-a68e-234347124bf0] Pending
helpers_test.go:344: "sp-pod" [f88952cf-3ad9-4ffa-a68e-234347124bf0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f88952cf-3ad9-4ffa-a68e-234347124bf0] Running
E0815 23:20:35.022808   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
2024/08/15 23:20:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003905365s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-629421 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.08s)
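The sequence above is a persistence check: a marker file is written onto the PVC-backed mount, the pod is deleted and re-created from the same manifest, and the marker is expected to survive. A minimal Go sketch of that flow driven through kubectl, with the context, manifest path and mount path taken from the log (waiting for the replacement pod to become Ready is omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the same context as the run above.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-629421"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// Write a marker onto the PVC mount, recycle the pod, then confirm the marker survived.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Print(string(out)) // expected to list "foo"
}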

                                                
                                    
TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

TestFunctional/parallel/CpCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh -n functional-629421 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 cp functional-629421:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1992429112/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh -n functional-629421 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh -n functional-629421 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.56s)

TestFunctional/parallel/MySQL (23.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-629421 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-ljwst" [ae6f5bf2-dff7-43bb-b9a1-86934a887bc9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-ljwst" [ae6f5bf2-dff7-43bb-b9a1-86934a887bc9] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.005350992s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-629421 exec mysql-6cdb49bbb-ljwst -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-629421 exec mysql-6cdb49bbb-ljwst -- mysql -ppassword -e "show databases;": exit status 1 (505.872314ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-629421 exec mysql-6cdb49bbb-ljwst -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-629421 exec mysql-6cdb49bbb-ljwst -- mysql -ppassword -e "show databases;": exit status 1 (141.42692ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-629421 exec mysql-6cdb49bbb-ljwst -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.13s)
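The two non-zero exits above are expected while mysqld is still initializing: first the transient access-denied phase, then a socket that is not yet listening. The test simply retries the query until it succeeds, which is why the section still passes. A minimal retry sketch in Go, with the pod name and query taken from this run (the attempt count and interval are arbitrary):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry "show databases;" until the MySQL pod finishes initializing.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-629421",
			"exec", "mysql-6cdb49bbb-ljwst", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
}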

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/20078/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo cat /etc/test/nested/copy/20078/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/20078.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo cat /etc/ssl/certs/20078.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/20078.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo cat /usr/share/ca-certificates/20078.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/200782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo cat /etc/ssl/certs/200782.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/200782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo cat /usr/share/ca-certificates/200782.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.33s)

TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-629421 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 ssh "sudo systemctl is-active docker": exit status 1 (226.709116ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 ssh "sudo systemctl is-active containerd": exit status 1 (221.076823ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

TestFunctional/parallel/License (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-629421 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-629421
localhost/kicbase/echo-server:functional-629421
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-629421 image ls --format short --alsologtostderr:
I0815 23:20:22.548048   29865 out.go:345] Setting OutFile to fd 1 ...
I0815 23:20:22.548329   29865 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:22.548345   29865 out.go:358] Setting ErrFile to fd 2...
I0815 23:20:22.548351   29865 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:22.548607   29865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
I0815 23:20:22.549401   29865 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:22.549554   29865 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:22.550032   29865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:22.550086   29865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:22.566696   29865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34999
I0815 23:20:22.567167   29865 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:22.567749   29865 main.go:141] libmachine: Using API Version  1
I0815 23:20:22.567775   29865 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:22.568143   29865 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:22.568358   29865 main.go:141] libmachine: (functional-629421) Calling .GetState
I0815 23:20:22.570279   29865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:22.570333   29865 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:22.585174   29865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
I0815 23:20:22.585603   29865 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:22.586083   29865 main.go:141] libmachine: Using API Version  1
I0815 23:20:22.586101   29865 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:22.586394   29865 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:22.586587   29865 main.go:141] libmachine: (functional-629421) Calling .DriverName
I0815 23:20:22.586803   29865 ssh_runner.go:195] Run: systemctl --version
I0815 23:20:22.586839   29865 main.go:141] libmachine: (functional-629421) Calling .GetSSHHostname
I0815 23:20:22.589962   29865 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:22.590375   29865 main.go:141] libmachine: (functional-629421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:07:2c", ip: ""} in network mk-functional-629421: {Iface:virbr1 ExpiryTime:2024-08-16 00:17:11 +0000 UTC Type:0 Mac:52:54:00:7a:07:2c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-629421 Clientid:01:52:54:00:7a:07:2c}
I0815 23:20:22.590410   29865 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined IP address 192.168.39.103 and MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:22.590557   29865 main.go:141] libmachine: (functional-629421) Calling .GetSSHPort
I0815 23:20:22.590743   29865 main.go:141] libmachine: (functional-629421) Calling .GetSSHKeyPath
I0815 23:20:22.590925   29865 main.go:141] libmachine: (functional-629421) Calling .GetSSHUsername
I0815 23:20:22.591074   29865 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/functional-629421/id_rsa Username:docker}
I0815 23:20:22.696341   29865 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 23:20:22.758082   29865 main.go:141] libmachine: Making call to close driver server
I0815 23:20:22.758099   29865 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:22.758425   29865 main.go:141] libmachine: (functional-629421) DBG | Closing plugin on server side
I0815 23:20:22.758411   29865 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:22.758451   29865 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 23:20:22.758460   29865 main.go:141] libmachine: Making call to close driver server
I0815 23:20:22.758467   29865 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:22.758699   29865 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:22.758722   29865 main.go:141] libmachine: (functional-629421) DBG | Closing plugin on server side
I0815 23:20:22.758755   29865 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-629421 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/minikube-local-cache-test     | functional-629421  | b63fc234e8809 | 3.33kB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/kicbase/echo-server           | functional-629421  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-629421  | 46111aa82cf6e | 1.47MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-629421 image ls --format table --alsologtostderr:
I0815 23:20:26.915261   30203 out.go:345] Setting OutFile to fd 1 ...
I0815 23:20:26.915549   30203 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:26.915561   30203 out.go:358] Setting ErrFile to fd 2...
I0815 23:20:26.915567   30203 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:26.915849   30203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
I0815 23:20:26.916642   30203 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:26.916806   30203 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:26.917361   30203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:26.917417   30203 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:26.932901   30203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
I0815 23:20:26.933337   30203 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:26.933901   30203 main.go:141] libmachine: Using API Version  1
I0815 23:20:26.933934   30203 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:26.934290   30203 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:26.934508   30203 main.go:141] libmachine: (functional-629421) Calling .GetState
I0815 23:20:26.936494   30203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:26.936544   30203 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:26.951369   30203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
I0815 23:20:26.951741   30203 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:26.952231   30203 main.go:141] libmachine: Using API Version  1
I0815 23:20:26.952258   30203 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:26.952585   30203 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:26.952805   30203 main.go:141] libmachine: (functional-629421) Calling .DriverName
I0815 23:20:26.953005   30203 ssh_runner.go:195] Run: systemctl --version
I0815 23:20:26.953039   30203 main.go:141] libmachine: (functional-629421) Calling .GetSSHHostname
I0815 23:20:26.955892   30203 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:26.956284   30203 main.go:141] libmachine: (functional-629421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:07:2c", ip: ""} in network mk-functional-629421: {Iface:virbr1 ExpiryTime:2024-08-16 00:17:11 +0000 UTC Type:0 Mac:52:54:00:7a:07:2c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-629421 Clientid:01:52:54:00:7a:07:2c}
I0815 23:20:26.956311   30203 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined IP address 192.168.39.103 and MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:26.956455   30203 main.go:141] libmachine: (functional-629421) Calling .GetSSHPort
I0815 23:20:26.956625   30203 main.go:141] libmachine: (functional-629421) Calling .GetSSHKeyPath
I0815 23:20:26.956757   30203 main.go:141] libmachine: (functional-629421) Calling .GetSSHUsername
I0815 23:20:26.956913   30203 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/functional-629421/id_rsa Username:docker}
I0815 23:20:27.067509   30203 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 23:20:27.166483   30203 main.go:141] libmachine: Making call to close driver server
I0815 23:20:27.166501   30203 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:27.166799   30203 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:27.166816   30203 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 23:20:27.166823   30203 main.go:141] libmachine: Making call to close driver server
I0815 23:20:27.166825   30203 main.go:141] libmachine: (functional-629421) DBG | Closing plugin on server side
I0815 23:20:27.166830   30203 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:27.167118   30203 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:27.167138   30203 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-629421 image ls --format json --alsologtostderr:
[{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"46111aa82cf6e3b5ada94b6ae0
12b14a570f02f285f1993720cf941760ac57e3","repoDigests":["localhost/my-image@sha256:07b3ab7cced8f154d8681e48add27aa8cfe609d8f402e07ac7ca45c1a34a1707"],"repoTags":["localhost/my-image:functional-629421"],"size":"1468600"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests"
:["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"
],"size":"92728217"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ece0049b014826915d93c84aa57fa73e249c6cc14acaab958376324bc07ec773","repoDigests":["docker.io/library/25edb7e5100bf450d2b89c884fadae54eed94913c9d3da0f658bd93c8594c47a-tmp@sha256:50792b1f221ae1cd0642734d67d2b262c903de981073f0476c5c5f85f8a7d0c1"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"5ef79149e0ec84a7a9
f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDige
sts":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-629421"],"size":"4943877"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-m
inikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"b63fc234e8809b09a4e2bc7b27e79637561dce0f1b66c099e56bbfc686ad6e43","repoDigests":["localhost/minikube-local-cache-test@sha256:1b14cba72e910d58fcb7e5ab8cf0b4eb6c5910b2028481c426336a89d44bc7a8"],"repoTags":["localhost/minikube-local-cache-test:functional-629421"],"size":"3330"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.
3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-629421 image ls --format json --alsologtostderr:
I0815 23:20:26.478711   30118 out.go:345] Setting OutFile to fd 1 ...
I0815 23:20:26.478863   30118 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:26.478890   30118 out.go:358] Setting ErrFile to fd 2...
I0815 23:20:26.478907   30118 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:26.479081   30118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
I0815 23:20:26.479653   30118 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:26.479772   30118 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:26.480253   30118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:26.480338   30118 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:26.498923   30118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39465
I0815 23:20:26.499442   30118 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:26.500082   30118 main.go:141] libmachine: Using API Version  1
I0815 23:20:26.500103   30118 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:26.501522   30118 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:26.501770   30118 main.go:141] libmachine: (functional-629421) Calling .GetState
I0815 23:20:26.503888   30118 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:26.503944   30118 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:26.523193   30118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
I0815 23:20:26.523730   30118 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:26.524274   30118 main.go:141] libmachine: Using API Version  1
I0815 23:20:26.524296   30118 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:26.524695   30118 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:26.524851   30118 main.go:141] libmachine: (functional-629421) Calling .DriverName
I0815 23:20:26.525059   30118 ssh_runner.go:195] Run: systemctl --version
I0815 23:20:26.525100   30118 main.go:141] libmachine: (functional-629421) Calling .GetSSHHostname
I0815 23:20:26.528185   30118 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:26.528543   30118 main.go:141] libmachine: (functional-629421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:07:2c", ip: ""} in network mk-functional-629421: {Iface:virbr1 ExpiryTime:2024-08-16 00:17:11 +0000 UTC Type:0 Mac:52:54:00:7a:07:2c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-629421 Clientid:01:52:54:00:7a:07:2c}
I0815 23:20:26.528639   30118 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined IP address 192.168.39.103 and MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:26.528843   30118 main.go:141] libmachine: (functional-629421) Calling .GetSSHPort
I0815 23:20:26.528985   30118 main.go:141] libmachine: (functional-629421) Calling .GetSSHKeyPath
I0815 23:20:26.529133   30118 main.go:141] libmachine: (functional-629421) Calling .GetSSHUsername
I0815 23:20:26.529264   30118 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/functional-629421/id_rsa Username:docker}
I0815 23:20:26.770471   30118 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 23:20:26.864251   30118 main.go:141] libmachine: Making call to close driver server
I0815 23:20:26.864268   30118 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:26.864568   30118 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:26.864590   30118 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 23:20:26.864608   30118 main.go:141] libmachine: Making call to close driver server
I0815 23:20:26.864620   30118 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:26.864897   30118 main.go:141] libmachine: (functional-629421) DBG | Closing plugin on server side
I0815 23:20:26.864963   30118 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:26.864974   30118 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.44s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-629421 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: b63fc234e8809b09a4e2bc7b27e79637561dce0f1b66c099e56bbfc686ad6e43
repoDigests:
- localhost/minikube-local-cache-test@sha256:1b14cba72e910d58fcb7e5ab8cf0b4eb6c5910b2028481c426336a89d44bc7a8
repoTags:
- localhost/minikube-local-cache-test:functional-629421
size: "3330"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-629421
size: "4943877"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-629421 image ls --format yaml --alsologtostderr:
I0815 23:20:22.804855   29889 out.go:345] Setting OutFile to fd 1 ...
I0815 23:20:22.804986   29889 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:22.805005   29889 out.go:358] Setting ErrFile to fd 2...
I0815 23:20:22.805011   29889 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:22.805209   29889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
I0815 23:20:22.805725   29889 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:22.805821   29889 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:22.806233   29889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:22.806273   29889 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:22.820875   29889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
I0815 23:20:22.821317   29889 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:22.821924   29889 main.go:141] libmachine: Using API Version  1
I0815 23:20:22.821964   29889 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:22.822274   29889 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:22.822525   29889 main.go:141] libmachine: (functional-629421) Calling .GetState
I0815 23:20:22.824386   29889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:22.824430   29889 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:22.843947   29889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
I0815 23:20:22.844454   29889 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:22.844954   29889 main.go:141] libmachine: Using API Version  1
I0815 23:20:22.844976   29889 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:22.845296   29889 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:22.845463   29889 main.go:141] libmachine: (functional-629421) Calling .DriverName
I0815 23:20:22.845638   29889 ssh_runner.go:195] Run: systemctl --version
I0815 23:20:22.845665   29889 main.go:141] libmachine: (functional-629421) Calling .GetSSHHostname
I0815 23:20:22.848454   29889 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:22.848850   29889 main.go:141] libmachine: (functional-629421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:07:2c", ip: ""} in network mk-functional-629421: {Iface:virbr1 ExpiryTime:2024-08-16 00:17:11 +0000 UTC Type:0 Mac:52:54:00:7a:07:2c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-629421 Clientid:01:52:54:00:7a:07:2c}
I0815 23:20:22.848877   29889 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined IP address 192.168.39.103 and MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:22.848982   29889 main.go:141] libmachine: (functional-629421) Calling .GetSSHPort
I0815 23:20:22.849146   29889 main.go:141] libmachine: (functional-629421) Calling .GetSSHKeyPath
I0815 23:20:22.849286   29889 main.go:141] libmachine: (functional-629421) Calling .GetSSHUsername
I0815 23:20:22.849435   29889 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/functional-629421/id_rsa Username:docker}
I0815 23:20:22.939737   29889 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 23:20:23.010657   29889 main.go:141] libmachine: Making call to close driver server
I0815 23:20:23.010675   29889 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:23.010946   29889 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:23.010966   29889 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 23:20:23.010973   29889 main.go:141] libmachine: (functional-629421) DBG | Closing plugin on server side
I0815 23:20:23.010985   29889 main.go:141] libmachine: Making call to close driver server
I0815 23:20:23.010995   29889 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:23.011218   29889 main.go:141] libmachine: (functional-629421) DBG | Closing plugin on server side
I0815 23:20:23.011240   29889 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:23.011261   29889 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 ssh pgrep buildkitd: exit status 1 (198.759845ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image build -t localhost/my-image:functional-629421 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-629421 image build -t localhost/my-image:functional-629421 testdata/build --alsologtostderr: (2.957700473s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-629421 image build -t localhost/my-image:functional-629421 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ece0049b014
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-629421
--> 46111aa82cf
Successfully tagged localhost/my-image:functional-629421
46111aa82cf6e3b5ada94b6ae012b14a570f02f285f1993720cf941760ac57e3
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-629421 image build -t localhost/my-image:functional-629421 testdata/build --alsologtostderr:
I0815 23:20:23.256764   29941 out.go:345] Setting OutFile to fd 1 ...
I0815 23:20:23.257103   29941 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:23.257117   29941 out.go:358] Setting ErrFile to fd 2...
I0815 23:20:23.257124   29941 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 23:20:23.257406   29941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
I0815 23:20:23.258255   29941 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:23.258949   29941 config.go:182] Loaded profile config "functional-629421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 23:20:23.259504   29941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:23.259552   29941 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:23.274500   29941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
I0815 23:20:23.274951   29941 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:23.275477   29941 main.go:141] libmachine: Using API Version  1
I0815 23:20:23.275495   29941 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:23.275852   29941 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:23.276032   29941 main.go:141] libmachine: (functional-629421) Calling .GetState
I0815 23:20:23.278178   29941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 23:20:23.278232   29941 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 23:20:23.293404   29941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
I0815 23:20:23.293911   29941 main.go:141] libmachine: () Calling .GetVersion
I0815 23:20:23.294343   29941 main.go:141] libmachine: Using API Version  1
I0815 23:20:23.294367   29941 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 23:20:23.294735   29941 main.go:141] libmachine: () Calling .GetMachineName
I0815 23:20:23.294968   29941 main.go:141] libmachine: (functional-629421) Calling .DriverName
I0815 23:20:23.295174   29941 ssh_runner.go:195] Run: systemctl --version
I0815 23:20:23.295197   29941 main.go:141] libmachine: (functional-629421) Calling .GetSSHHostname
I0815 23:20:23.298378   29941 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:23.298714   29941 main.go:141] libmachine: (functional-629421) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:07:2c", ip: ""} in network mk-functional-629421: {Iface:virbr1 ExpiryTime:2024-08-16 00:17:11 +0000 UTC Type:0 Mac:52:54:00:7a:07:2c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:functional-629421 Clientid:01:52:54:00:7a:07:2c}
I0815 23:20:23.298754   29941 main.go:141] libmachine: (functional-629421) DBG | domain functional-629421 has defined IP address 192.168.39.103 and MAC address 52:54:00:7a:07:2c in network mk-functional-629421
I0815 23:20:23.298908   29941 main.go:141] libmachine: (functional-629421) Calling .GetSSHPort
I0815 23:20:23.299074   29941 main.go:141] libmachine: (functional-629421) Calling .GetSSHKeyPath
I0815 23:20:23.299260   29941 main.go:141] libmachine: (functional-629421) Calling .GetSSHUsername
I0815 23:20:23.299445   29941 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/functional-629421/id_rsa Username:docker}
I0815 23:20:23.388852   29941 build_images.go:161] Building image from path: /tmp/build.753453594.tar
I0815 23:20:23.388908   29941 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 23:20:23.401548   29941 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.753453594.tar
I0815 23:20:23.410861   29941 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.753453594.tar: stat -c "%s %y" /var/lib/minikube/build/build.753453594.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.753453594.tar': No such file or directory
I0815 23:20:23.410892   29941 ssh_runner.go:362] scp /tmp/build.753453594.tar --> /var/lib/minikube/build/build.753453594.tar (3072 bytes)
I0815 23:20:23.446053   29941 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.753453594
I0815 23:20:23.458025   29941 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.753453594 -xf /var/lib/minikube/build/build.753453594.tar
I0815 23:20:23.491949   29941 crio.go:315] Building image: /var/lib/minikube/build/build.753453594
I0815 23:20:23.492047   29941 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-629421 /var/lib/minikube/build/build.753453594 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0815 23:20:26.116107   29941 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-629421 /var/lib/minikube/build/build.753453594 --cgroup-manager=cgroupfs: (2.624028954s)
I0815 23:20:26.116172   29941 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.753453594
I0815 23:20:26.143122   29941 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.753453594.tar
I0815 23:20:26.166675   29941 build_images.go:217] Built localhost/my-image:functional-629421 from /tmp/build.753453594.tar
I0815 23:20:26.166708   29941 build_images.go:133] succeeded building to: functional-629421
I0815 23:20:26.166714   29941 build_images.go:134] failed building to: 
I0815 23:20:26.166741   29941 main.go:141] libmachine: Making call to close driver server
I0815 23:20:26.166756   29941 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:26.167052   29941 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:26.167068   29941 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 23:20:26.167089   29941 main.go:141] libmachine: Making call to close driver server
I0815 23:20:26.167098   29941 main.go:141] libmachine: (functional-629421) Calling .Close
I0815 23:20:26.167322   29941 main.go:141] libmachine: (functional-629421) DBG | Closing plugin on server side
I0815 23:20:26.167374   29941 main.go:141] libmachine: Successfully made call to close driver server
I0815 23:20:26.167386   29941 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.42s)
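For reference, the build exercised here can be reproduced by hand against the same profile. The minikube commands below are the ones recorded in the log; the build-context files are only inferred from the STEP 1/3..3/3 lines above, so treat this as a sketch rather than the exact testdata/build contents.
# Sketch: rebuild the image manually (build-context contents are assumptions).
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
echo "placeholder" > content.txt   # the real file contents are not shown in the log
out/minikube-linux-amd64 -p functional-629421 image build -t localhost/my-image:functional-629421 .
out/minikube-linux-amd64 -p functional-629421 image ls   # the new tag should appear in the listing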

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-629421
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image load --daemon kicbase/echo-server:functional-629421 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-629421 image load --daemon kicbase/echo-server:functional-629421 --alsologtostderr: (1.34402985s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "239.050769ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.745822ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "256.29064ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "54.126994ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
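Both ProfileCmd subtests above only time the profile listing; the equivalent manual invocations are simply:
out/minikube-linux-amd64 profile list
out/minikube-linux-amd64 profile list -o json --light   # lighter JSON variant used for the second timing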

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image load --daemon kicbase/echo-server:functional-629421 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-629421
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image load --daemon kicbase/echo-server:functional-629421 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image save kicbase/echo-server:functional-629421 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-629421 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.804680883s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.08s)
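ImageSaveToFile and ImageLoadFromFile together exercise a save/load round trip through a tarball. A rough sketch of the same flow, reusing the profile and tag from the log (the tarball path here is arbitrary):
# Export the image from the cluster runtime to a tar file, then load it back and verify.
out/minikube-linux-amd64 -p functional-629421 image save kicbase/echo-server:functional-629421 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-629421 image load /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-629421 image ls   # kicbase/echo-server:functional-629421 should be listed again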

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-629421
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 image save --daemon kicbase/echo-server:functional-629421 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-629421
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.94s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-629421 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-629421 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-ck9mk" [7eba367a-6166-4cca-bf83-4b4955733b10] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-ck9mk" [7eba367a-6166-4cca-bf83-4b4955733b10] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003914337s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.28s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdany-port2843074819/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723764018789268622" to /tmp/TestFunctionalparallelMountCmdany-port2843074819/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723764018789268622" to /tmp/TestFunctionalparallelMountCmdany-port2843074819/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723764018789268622" to /tmp/TestFunctionalparallelMountCmdany-port2843074819/001/test-1723764018789268622
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (187.019444ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 23:20 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 23:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 23:20 test-1723764018789268622
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh cat /mount-9p/test-1723764018789268622
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-629421 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [74dd7608-d2cc-4ada-9940-9c3cdaf5002e] Pending
helpers_test.go:344: "busybox-mount" [74dd7608-d2cc-4ada-9940-9c3cdaf5002e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [74dd7608-d2cc-4ada-9940-9c3cdaf5002e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [74dd7608-d2cc-4ada-9940-9c3cdaf5002e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004178612s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-629421 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdany-port2843074819/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.67s)
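The 9p mount flow checked above can be repeated by hand; a sketch using the same guest path (the host directory and file names are placeholders):
# Mount a host directory into the guest over 9p, inspect it, then unmount.
mkdir -p /tmp/mount-src && echo "created-by-hand" > /tmp/mount-src/hello.txt
out/minikube-linux-amd64 mount -p functional-629421 /tmp/mount-src:/mount-9p &   # keep the mount helper running in the background
out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-629421 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-629421 ssh "sudo umount -f /mount-9p"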

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 service list -o json
functional_test.go:1494: Took "426.277983ms" to run "out/minikube-linux-amd64 -p functional-629421 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.103:32140
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.103:32140
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
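The ServiceCmd subtests walk through deploying an app, exposing it as a NodePort service, and resolving its URL. A condensed sketch of the same steps with the names used in the log (the final curl is an optional smoke check, not part of the test):
kubectl --context functional-629421 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-629421 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-629421 service list
out/minikube-linux-amd64 -p functional-629421 service hello-node --url   # e.g. http://192.168.39.103:32140
curl -s "$(out/minikube-linux-amd64 -p functional-629421 service hello-node --url)"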

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdspecific-port2778044196/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (332.303681ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdspecific-port2778044196/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 ssh "sudo umount -f /mount-9p": exit status 1 (224.043586ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-629421 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdspecific-port2778044196/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163595251/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163595251/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163595251/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T" /mount1: exit status 1 (268.359968ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-629421 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-629421 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163595251/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163595251/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-629421 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3163595251/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)
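VerifyCleanup ends by tearing down all outstanding mount helpers for the profile with the --kill flag, which is also the manual escape hatch if a mount process is left behind:
out/minikube-linux-amd64 mount -p functional-629421 --kill=true   # terminates every mount helper for this profile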

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-629421
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-629421
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-629421
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (192.45s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-175414 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 23:22:51.159581   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:23:18.867235   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-175414 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m11.77875558s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (192.45s)
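The HA cluster used by the remaining TestMultiControlPlane subtests is created with the flags shown above; the equivalent manual invocation is roughly:
# --ha requests a multi-control-plane topology on the kvm2 driver with the CRI-O runtime.
out/minikube-linux-amd64 start -p ha-175414 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr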

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.98s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-175414 -- rollout status deployment/busybox: (3.865746216s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-glqlv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-kt8v4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-ztvms -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-glqlv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-kt8v4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-ztvms -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-glqlv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-kt8v4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-ztvms -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.98s)
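DeployApp repeats the same DNS lookups from every busybox replica. Once the rollout has finished, a single replica can be checked by hand along these lines (selecting the first pod via jsonpath is an assumption; any replica name from the log works):
out/minikube-linux-amd64 kubectl -p ha-175414 -- rollout status deployment/busybox
POD=$(out/minikube-linux-amd64 kubectl -p ha-175414 -- get pods -o jsonpath='{.items[0].metadata.name}')
out/minikube-linux-amd64 kubectl -p ha-175414 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local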

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.24s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-glqlv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-glqlv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-kt8v4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-kt8v4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-ztvms -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-175414 -- exec busybox-7dff88458-ztvms -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-175414 -v=7 --alsologtostderr
E0815 23:24:53.799150   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:24:53.805557   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:24:53.816955   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:24:53.839045   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:24:53.880850   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:24:53.962286   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-175414 -v=7 --alsologtostderr: (55.279279527s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
E0815 23:24:54.123552   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:24:54.445090   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.11s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-175414 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0815 23:24:55.087475   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.62s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp testdata/cp-test.txt ha-175414:/home/docker/cp-test.txt
E0815 23:24:56.369379   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile430320474/001/cp-test_ha-175414.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414:/home/docker/cp-test.txt ha-175414-m02:/home/docker/cp-test_ha-175414_ha-175414-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m02 "sudo cat /home/docker/cp-test_ha-175414_ha-175414-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414:/home/docker/cp-test.txt ha-175414-m03:/home/docker/cp-test_ha-175414_ha-175414-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m03 "sudo cat /home/docker/cp-test_ha-175414_ha-175414-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414:/home/docker/cp-test.txt ha-175414-m04:/home/docker/cp-test_ha-175414_ha-175414-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414 "sudo cat /home/docker/cp-test.txt"
E0815 23:24:58.931416   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m04 "sudo cat /home/docker/cp-test_ha-175414_ha-175414-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp testdata/cp-test.txt ha-175414-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile430320474/001/cp-test_ha-175414-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m02:/home/docker/cp-test.txt ha-175414:/home/docker/cp-test_ha-175414-m02_ha-175414.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414 "sudo cat /home/docker/cp-test_ha-175414-m02_ha-175414.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m02:/home/docker/cp-test.txt ha-175414-m03:/home/docker/cp-test_ha-175414-m02_ha-175414-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m03 "sudo cat /home/docker/cp-test_ha-175414-m02_ha-175414-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m02:/home/docker/cp-test.txt ha-175414-m04:/home/docker/cp-test_ha-175414-m02_ha-175414-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m04 "sudo cat /home/docker/cp-test_ha-175414-m02_ha-175414-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp testdata/cp-test.txt ha-175414-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile430320474/001/cp-test_ha-175414-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt ha-175414:/home/docker/cp-test_ha-175414-m03_ha-175414.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414 "sudo cat /home/docker/cp-test_ha-175414-m03_ha-175414.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt ha-175414-m02:/home/docker/cp-test_ha-175414-m03_ha-175414-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m03 "sudo cat /home/docker/cp-test.txt"
E0815 23:25:04.052697   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m02 "sudo cat /home/docker/cp-test_ha-175414-m03_ha-175414-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m03:/home/docker/cp-test.txt ha-175414-m04:/home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m04 "sudo cat /home/docker/cp-test_ha-175414-m03_ha-175414-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp testdata/cp-test.txt ha-175414-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile430320474/001/cp-test_ha-175414-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt ha-175414:/home/docker/cp-test_ha-175414-m04_ha-175414.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414 "sudo cat /home/docker/cp-test_ha-175414-m04_ha-175414.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt ha-175414-m02:/home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m02 "sudo cat /home/docker/cp-test_ha-175414-m04_ha-175414-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 cp ha-175414-m04:/home/docker/cp-test.txt ha-175414-m03:/home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m03 "sudo cat /home/docker/cp-test_ha-175414-m04_ha-175414-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.62s)
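Note: each CopyFile step above is a copy-then-verify round trip: "minikube cp" pushes the file and "minikube ssh -n <node>" reads it back with "sudo cat" so it can be compared against the source. A minimal by-hand sketch for one node pair, using the profile and paths from this run (the diff step is illustrative and not part of the test harness):

    out/minikube-linux-amd64 -p ha-175414 cp testdata/cp-test.txt ha-175414-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-175414 ssh -n ha-175414-m02 "sudo cat /home/docker/cp-test.txt" > /tmp/cp-test.out
    diff testdata/cp-test.txt /tmp/cp-test.out   # an empty diff means the copy round trip succeeded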

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.46199015s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)
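Note: the Degraded*/HAppy* subtests in this group only run "profile list --output json" and inspect the status reported for the ha-175414 profile, which is why they finish in a few seconds at most. A sketch of reading that status by hand; the .valid[].Name/.Status field names are an assumption about minikube's profile-list JSON and are not shown in this log:

    out/minikube-linux-amd64 profile list --output json \
      | jq -r '.valid[] | select(.Name=="ha-175414") | .Status'
    # with a control-plane node stopped, a degraded status is the expected result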

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 node delete m03 -v=7 --alsologtostderr
E0815 23:34:53.799781   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-175414 node delete m03 -v=7 --alsologtostderr: (15.644713171s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.37s)
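Note: the final "kubectl get nodes -o go-template=..." step walks every node's .status.conditions and prints the status of the Ready condition, letting the test assert that only Ready nodes remain after the delete. The same check can be run by hand against this cluster; the grep is illustrative:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}' \
      | grep -v True   # any output here would point at a NotReady node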

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (487.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-175414 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 23:37:51.160914   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:39:53.801006   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:41:16.863130   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:42:51.159625   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0815 23:44:53.799972   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-175414 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (8m7.096309046s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (487.95s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (72.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-175414 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-175414 --control-plane -v=7 --alsologtostderr: (1m11.965371466s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-175414 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.79s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (84.71s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-964548 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0815 23:47:51.160644   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-964548 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.711123068s)
--- PASS: TestJSONOutput/start/Command (84.71s)
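Note: with --output=json every progress line is a CloudEvents-style JSON object, and the DistinctCurrentSteps / IncreasingCurrentSteps subtests below check (as their names suggest) that the "currentstep" values of the step events never repeat and never decrease. A sketch of extracting those values from the same command, assuming every output line is a JSON event; the jq filter is an illustration, not part of the test:

    out/minikube-linux-amd64 start -p json-output-964548 --output=json --user=testUser \
      --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type=="io.k8s.sigs.minikube.step") | .data.currentstep'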

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-964548 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-964548 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-964548 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-964548 --output=json --user=testUser: (7.327847099s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-824349 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-824349 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.53869ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"43e14dea-9b71-424c-8399-3af20bd408fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-824349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1294a3a3-47ad-4b9f-9843-b36b1ef43d62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19452"}}
	{"specversion":"1.0","id":"2f14669e-3ac5-4dca-ad98-34080fde2957","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"597ae68e-f413-448f-b8c2-6966c38c05bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig"}}
	{"specversion":"1.0","id":"604b9d36-8262-464b-8dc1-28f809aec406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube"}}
	{"specversion":"1.0","id":"96a8bc4a-8376-4d08-99df-6c3724ade834","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d65f3c5b-3416-47c2-be8d-998afa0bac57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e6ecec6c-420f-4c26-acc9-f5336e040264","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-824349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-824349
--- PASS: TestErrorJSONOutput (0.18s)
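Note: the stdout above shows the error path of the same JSON output: the unsupported "fail" driver ends the stream with an event of type io.k8s.sigs.minikube.error whose data carries the error name (DRV_UNSUPPORTED_OS) and exit code (56), matching the process exit status. A sketch of pulling that event out of the stream; the jq step is illustrative:

    out/minikube-linux-amd64 start -p json-output-error-824349 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + ")"'
    # expected output: DRV_UNSUPPORTED_OS (exit 56)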

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (88.98s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-392017 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-392017 --driver=kvm2  --container-runtime=crio: (42.717889156s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-394655 --driver=kvm2  --container-runtime=crio
E0815 23:49:53.800903   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-394655 --driver=kvm2  --container-runtime=crio: (43.446472604s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-392017
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-394655
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-394655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-394655
helpers_test.go:175: Cleaning up "first-392017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-392017
--- PASS: TestMinikubeProfile (88.98s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-783961 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-783961 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.158339065s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.16s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-783961 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-783961 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
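Note: the verification above asserts two things inside the guest: the host directory is reachable at /minikube-host, and it is served over the 9p protocol requested by the --mount flags. Run by hand against the same profile, non-empty output from the second command confirms the 9p mount:

    out/minikube-linux-amd64 -p mount-start-1-783961 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-783961 ssh -- mount | grep 9p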

                                                
                                    
TestMountStart/serial/StartWithMountSecond (31.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-801606 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0815 23:50:54.230903   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-801606 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.896835701s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.90s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-801606 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-801606 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.04s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-783961 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-783961 --alsologtostderr -v=5: (1.043800932s)
--- PASS: TestMountStart/serial/DeleteFirst (1.04s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-801606 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-801606 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-801606
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-801606: (1.275242889s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.39s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-801606
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-801606: (19.385914383s)
--- PASS: TestMountStart/serial/RestartStopped (20.39s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-801606 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-801606 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-145108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 23:52:51.160721   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-145108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.507639687s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.90s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-145108 -- rollout status deployment/busybox: (3.419141089s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-7rpbh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-h45mw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-7rpbh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-h45mw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-7rpbh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-h45mw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.86s)
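Note: after the busybox deployment rolls out, the test resolves kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from one pod on each node, so a DNS failure on either node fails the test. A by-hand sketch of the same per-pod loop, assuming the kubeconfig context carries the profile name (the pod names are taken from this run):

    for pod in busybox-7dff88458-7rpbh busybox-7dff88458-h45mw; do
      kubectl --context multinode-145108 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done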

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-7rpbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-7rpbh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-h45mw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-145108 -- exec busybox-7dff88458-h45mw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (50.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-145108 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-145108 -v 3 --alsologtostderr: (50.074208586s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.63s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-145108 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp testdata/cp-test.txt multinode-145108:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp multinode-145108:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1410064125/001/cp-test_multinode-145108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp multinode-145108:/home/docker/cp-test.txt multinode-145108-m02:/home/docker/cp-test_multinode-145108_multinode-145108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m02 "sudo cat /home/docker/cp-test_multinode-145108_multinode-145108-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp multinode-145108:/home/docker/cp-test.txt multinode-145108-m03:/home/docker/cp-test_multinode-145108_multinode-145108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m03 "sudo cat /home/docker/cp-test_multinode-145108_multinode-145108-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp testdata/cp-test.txt multinode-145108-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp multinode-145108-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1410064125/001/cp-test_multinode-145108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp multinode-145108-m02:/home/docker/cp-test.txt multinode-145108:/home/docker/cp-test_multinode-145108-m02_multinode-145108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108 "sudo cat /home/docker/cp-test_multinode-145108-m02_multinode-145108.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp multinode-145108-m02:/home/docker/cp-test.txt multinode-145108-m03:/home/docker/cp-test_multinode-145108-m02_multinode-145108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m03 "sudo cat /home/docker/cp-test_multinode-145108-m02_multinode-145108-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp testdata/cp-test.txt multinode-145108-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp multinode-145108-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1410064125/001/cp-test_multinode-145108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp multinode-145108-m03:/home/docker/cp-test.txt multinode-145108:/home/docker/cp-test_multinode-145108-m03_multinode-145108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108 "sudo cat /home/docker/cp-test_multinode-145108-m03_multinode-145108.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 cp multinode-145108-m03:/home/docker/cp-test.txt multinode-145108-m02:/home/docker/cp-test_multinode-145108-m03_multinode-145108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 ssh -n multinode-145108-m02 "sudo cat /home/docker/cp-test_multinode-145108-m03_multinode-145108-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.98s)

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-145108 node stop m03: (1.478605219s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-145108 status: exit status 7 (414.176762ms)

                                                
                                                
-- stdout --
	multinode-145108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-145108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-145108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-145108 status --alsologtostderr: exit status 7 (413.330601ms)

                                                
                                                
-- stdout --
	multinode-145108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-145108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-145108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 23:54:22.382696   48249 out.go:345] Setting OutFile to fd 1 ...
	I0815 23:54:22.382821   48249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:54:22.382831   48249 out.go:358] Setting ErrFile to fd 2...
	I0815 23:54:22.382838   48249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 23:54:22.383017   48249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0815 23:54:22.383193   48249 out.go:352] Setting JSON to false
	I0815 23:54:22.383222   48249 mustload.go:65] Loading cluster: multinode-145108
	I0815 23:54:22.383331   48249 notify.go:220] Checking for updates...
	I0815 23:54:22.383627   48249 config.go:182] Loaded profile config "multinode-145108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 23:54:22.383643   48249 status.go:255] checking status of multinode-145108 ...
	I0815 23:54:22.384039   48249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:54:22.384107   48249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:54:22.402763   48249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34415
	I0815 23:54:22.403184   48249 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:54:22.403737   48249 main.go:141] libmachine: Using API Version  1
	I0815 23:54:22.403758   48249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:54:22.404136   48249 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:54:22.404311   48249 main.go:141] libmachine: (multinode-145108) Calling .GetState
	I0815 23:54:22.405973   48249 status.go:330] multinode-145108 host status = "Running" (err=<nil>)
	I0815 23:54:22.405994   48249 host.go:66] Checking if "multinode-145108" exists ...
	I0815 23:54:22.406300   48249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:54:22.406334   48249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:54:22.421024   48249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0815 23:54:22.421369   48249 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:54:22.421772   48249 main.go:141] libmachine: Using API Version  1
	I0815 23:54:22.421793   48249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:54:22.422083   48249 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:54:22.422247   48249 main.go:141] libmachine: (multinode-145108) Calling .GetIP
	I0815 23:54:22.424781   48249 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:54:22.425165   48249 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:54:22.425194   48249 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:54:22.425251   48249 host.go:66] Checking if "multinode-145108" exists ...
	I0815 23:54:22.425542   48249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:54:22.425574   48249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:54:22.440252   48249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0815 23:54:22.440653   48249 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:54:22.441146   48249 main.go:141] libmachine: Using API Version  1
	I0815 23:54:22.441181   48249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:54:22.441439   48249 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:54:22.441614   48249 main.go:141] libmachine: (multinode-145108) Calling .DriverName
	I0815 23:54:22.441782   48249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:54:22.441807   48249 main.go:141] libmachine: (multinode-145108) Calling .GetSSHHostname
	I0815 23:54:22.444465   48249 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:54:22.444863   48249 main.go:141] libmachine: (multinode-145108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:52:b5", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:51:39 +0000 UTC Type:0 Mac:52:54:00:a6:52:b5 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-145108 Clientid:01:52:54:00:a6:52:b5}
	I0815 23:54:22.444886   48249 main.go:141] libmachine: (multinode-145108) DBG | domain multinode-145108 has defined IP address 192.168.39.117 and MAC address 52:54:00:a6:52:b5 in network mk-multinode-145108
	I0815 23:54:22.445033   48249 main.go:141] libmachine: (multinode-145108) Calling .GetSSHPort
	I0815 23:54:22.445176   48249 main.go:141] libmachine: (multinode-145108) Calling .GetSSHKeyPath
	I0815 23:54:22.445299   48249 main.go:141] libmachine: (multinode-145108) Calling .GetSSHUsername
	I0815 23:54:22.445445   48249 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108/id_rsa Username:docker}
	I0815 23:54:22.521763   48249 ssh_runner.go:195] Run: systemctl --version
	I0815 23:54:22.527837   48249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:54:22.541747   48249 kubeconfig.go:125] found "multinode-145108" server: "https://192.168.39.117:8443"
	I0815 23:54:22.541795   48249 api_server.go:166] Checking apiserver status ...
	I0815 23:54:22.541834   48249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 23:54:22.555328   48249 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1078/cgroup
	W0815 23:54:22.564296   48249 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1078/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 23:54:22.564362   48249 ssh_runner.go:195] Run: ls
	I0815 23:54:22.568683   48249 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0815 23:54:22.572531   48249 api_server.go:279] https://192.168.39.117:8443/healthz returned 200:
	ok
	I0815 23:54:22.572549   48249 status.go:422] multinode-145108 apiserver status = Running (err=<nil>)
	I0815 23:54:22.572558   48249 status.go:257] multinode-145108 status: &{Name:multinode-145108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:54:22.572579   48249 status.go:255] checking status of multinode-145108-m02 ...
	I0815 23:54:22.572867   48249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:54:22.572901   48249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:54:22.588694   48249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41571
	I0815 23:54:22.589101   48249 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:54:22.589554   48249 main.go:141] libmachine: Using API Version  1
	I0815 23:54:22.589584   48249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:54:22.589896   48249 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:54:22.590105   48249 main.go:141] libmachine: (multinode-145108-m02) Calling .GetState
	I0815 23:54:22.591611   48249 status.go:330] multinode-145108-m02 host status = "Running" (err=<nil>)
	I0815 23:54:22.591638   48249 host.go:66] Checking if "multinode-145108-m02" exists ...
	I0815 23:54:22.591934   48249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:54:22.591979   48249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:54:22.607040   48249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I0815 23:54:22.607452   48249 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:54:22.607889   48249 main.go:141] libmachine: Using API Version  1
	I0815 23:54:22.607908   48249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:54:22.608186   48249 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:54:22.608363   48249 main.go:141] libmachine: (multinode-145108-m02) Calling .GetIP
	I0815 23:54:22.610884   48249 main.go:141] libmachine: (multinode-145108-m02) DBG | domain multinode-145108-m02 has defined MAC address 52:54:00:53:5b:90 in network mk-multinode-145108
	I0815 23:54:22.611247   48249 main.go:141] libmachine: (multinode-145108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:90", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:52:43 +0000 UTC Type:0 Mac:52:54:00:53:5b:90 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:multinode-145108-m02 Clientid:01:52:54:00:53:5b:90}
	I0815 23:54:22.611276   48249 main.go:141] libmachine: (multinode-145108-m02) DBG | domain multinode-145108-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:53:5b:90 in network mk-multinode-145108
	I0815 23:54:22.611398   48249 host.go:66] Checking if "multinode-145108-m02" exists ...
	I0815 23:54:22.611706   48249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:54:22.611744   48249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:54:22.626551   48249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0815 23:54:22.626936   48249 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:54:22.627327   48249 main.go:141] libmachine: Using API Version  1
	I0815 23:54:22.627347   48249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:54:22.627619   48249 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:54:22.627826   48249 main.go:141] libmachine: (multinode-145108-m02) Calling .DriverName
	I0815 23:54:22.628010   48249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 23:54:22.628030   48249 main.go:141] libmachine: (multinode-145108-m02) Calling .GetSSHHostname
	I0815 23:54:22.630569   48249 main.go:141] libmachine: (multinode-145108-m02) DBG | domain multinode-145108-m02 has defined MAC address 52:54:00:53:5b:90 in network mk-multinode-145108
	I0815 23:54:22.630932   48249 main.go:141] libmachine: (multinode-145108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:5b:90", ip: ""} in network mk-multinode-145108: {Iface:virbr1 ExpiryTime:2024-08-16 00:52:43 +0000 UTC Type:0 Mac:52:54:00:53:5b:90 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:multinode-145108-m02 Clientid:01:52:54:00:53:5b:90}
	I0815 23:54:22.630958   48249 main.go:141] libmachine: (multinode-145108-m02) DBG | domain multinode-145108-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:53:5b:90 in network mk-multinode-145108
	I0815 23:54:22.631079   48249 main.go:141] libmachine: (multinode-145108-m02) Calling .GetSSHPort
	I0815 23:54:22.631210   48249 main.go:141] libmachine: (multinode-145108-m02) Calling .GetSSHKeyPath
	I0815 23:54:22.631362   48249 main.go:141] libmachine: (multinode-145108-m02) Calling .GetSSHUsername
	I0815 23:54:22.631485   48249 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19452-12919/.minikube/machines/multinode-145108-m02/id_rsa Username:docker}
	I0815 23:54:22.721462   48249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 23:54:22.736427   48249 status.go:257] multinode-145108-m02 status: &{Name:multinode-145108-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0815 23:54:22.736465   48249 status.go:255] checking status of multinode-145108-m03 ...
	I0815 23:54:22.736844   48249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 23:54:22.736879   48249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 23:54:22.751793   48249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0815 23:54:22.752229   48249 main.go:141] libmachine: () Calling .GetVersion
	I0815 23:54:22.752702   48249 main.go:141] libmachine: Using API Version  1
	I0815 23:54:22.752725   48249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 23:54:22.753107   48249 main.go:141] libmachine: () Calling .GetMachineName
	I0815 23:54:22.753288   48249 main.go:141] libmachine: (multinode-145108-m03) Calling .GetState
	I0815 23:54:22.754712   48249 status.go:330] multinode-145108-m03 host status = "Stopped" (err=<nil>)
	I0815 23:54:22.754737   48249 status.go:343] host is not running, skipping remaining checks
	I0815 23:54:22.754744   48249 status.go:257] multinode-145108-m03 status: &{Name:multinode-145108-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 node start m03 -v=7 --alsologtostderr
E0815 23:54:53.800100   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-145108 node start m03 -v=7 --alsologtostderr: (37.086300506s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.70s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-145108 node delete m03: (1.824261327s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.36s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (195.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-145108 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0816 00:02:51.160000   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:04:53.801235   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-145108 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.699268653s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-145108 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (195.26s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-145108
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-145108-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-145108-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.188562ms)

                                                
                                                
-- stdout --
	* [multinode-145108-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-145108-m02' is duplicated with machine name 'multinode-145108-m02' in profile 'multinode-145108'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-145108-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-145108-m03 --driver=kvm2  --container-runtime=crio: (42.125160631s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-145108
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-145108: exit status 80 (200.8931ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-145108 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-145108-m03 already exists in multinode-145108-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-145108-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.42s)

                                                
                                    
TestScheduledStopUnix (118.08s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-879975 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-879975 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.533054579s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-879975 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-879975 -n scheduled-stop-879975
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-879975 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-879975 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-879975 -n scheduled-stop-879975
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-879975
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-879975 --schedule 15s
E0816 00:12:51.161019   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-879975
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-879975: exit status 7 (60.759462ms)

                                                
                                                
-- stdout --
	scheduled-stop-879975
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-879975 -n scheduled-stop-879975
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-879975 -n scheduled-stop-879975: exit status 7 (64.234868ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-879975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-879975
--- PASS: TestScheduledStopUnix (118.08s)

                                                
                                    
TestRunningBinaryUpgrade (201.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.605619571 start -p running-upgrade-986094 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.605619571 start -p running-upgrade-986094 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m54.86074073s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-986094 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-986094 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.250331814s)
helpers_test.go:175: Cleaning up "running-upgrade-986094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-986094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-986094: (1.927804292s)
--- PASS: TestRunningBinaryUpgrade (201.49s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153553 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-153553 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.77983ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-153553] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (92.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153553 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-153553 --driver=kvm2  --container-runtime=crio: (1m32.622009919s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-153553 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (92.87s)

                                                
                                    
TestNetworkPlugins/group/false (2.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-697641 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-697641 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (100.695412ms)

                                                
                                                
-- stdout --
	* [false-697641] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19452
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 00:13:27.050297   56049 out.go:345] Setting OutFile to fd 1 ...
	I0816 00:13:27.050539   56049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:13:27.050549   56049 out.go:358] Setting ErrFile to fd 2...
	I0816 00:13:27.050553   56049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 00:13:27.050745   56049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19452-12919/.minikube/bin
	I0816 00:13:27.051288   56049 out.go:352] Setting JSON to false
	I0816 00:13:27.052149   56049 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6907,"bootTime":1723760300,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0816 00:13:27.052207   56049 start.go:139] virtualization: kvm guest
	I0816 00:13:27.054134   56049 out.go:177] * [false-697641] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0816 00:13:27.055505   56049 out.go:177]   - MINIKUBE_LOCATION=19452
	I0816 00:13:27.055575   56049 notify.go:220] Checking for updates...
	I0816 00:13:27.057942   56049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 00:13:27.059133   56049 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19452-12919/kubeconfig
	I0816 00:13:27.060634   56049 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19452-12919/.minikube
	I0816 00:13:27.061975   56049 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 00:13:27.063301   56049 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 00:13:27.064943   56049 config.go:182] Loaded profile config "NoKubernetes-153553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:13:27.065051   56049 config.go:182] Loaded profile config "force-systemd-env-222534": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:13:27.065154   56049 config.go:182] Loaded profile config "offline-crio-116258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0816 00:13:27.065254   56049 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 00:13:27.103533   56049 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 00:13:27.104737   56049 start.go:297] selected driver: kvm2
	I0816 00:13:27.104752   56049 start.go:901] validating driver "kvm2" against <nil>
	I0816 00:13:27.104764   56049 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 00:13:27.106614   56049 out.go:201] 
	W0816 00:13:27.107739   56049 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0816 00:13:27.108875   56049 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-697641 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-697641" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-697641

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-697641"

                                                
                                                
----------------------- debugLogs end: false-697641 [took: 2.55757314s] --------------------------------
helpers_test.go:175: Cleaning up "false-697641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-697641
--- PASS: TestNetworkPlugins/group/false (2.79s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (38.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153553 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-153553 --no-kubernetes --driver=kvm2  --container-runtime=crio: (37.411401084s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-153553 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-153553 status -o json: exit status 2 (241.819307ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-153553","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-153553
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-153553: (1.00762401s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (108.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3418596430 start -p stopped-upgrade-329005 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3418596430 start -p stopped-upgrade-329005 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m1.427939985s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3418596430 -p stopped-upgrade-329005 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3418596430 -p stopped-upgrade-329005 stop: (2.127237s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-329005 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-329005 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.724614283s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (108.28s)

                                                
                                    
TestNoKubernetes/serial/Start (45.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153553 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-153553 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.016695172s)
--- PASS: TestNoKubernetes/serial/Start (45.02s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-153553 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-153553 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.2207ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (28.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.910104284s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.083179632s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.99s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-153553
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-153553: (2.382669326s)
--- PASS: TestNoKubernetes/serial/Stop (2.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153553 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-153553 --driver=kvm2  --container-runtime=crio: (23.160876016s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.16s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-329005
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-153553 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-153553 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.064548ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestPause/serial/Start (91.45s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-937923 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-937923 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m31.4500357s)
--- PASS: TestPause/serial/Start (91.45s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (96.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m36.545800987s)
--- PASS: TestNetworkPlugins/group/auto/Start (96.55s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.78s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-937923 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-937923 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.754193172s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.78s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-697641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-697641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h4sns" [8b761a9c-9344-4005-b78e-b9f57f5f811c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h4sns" [8b761a9c-9344-4005-b78e-b9f57f5f811c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004593873s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (63.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m3.383054306s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.38s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-937923 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-697641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-937923 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-937923 --output=json --layout=cluster: exit status 2 (271.78399ms)

                                                
                                                
-- stdout --
	{"Name":"pause-937923","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-937923","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-937923 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
TestPause/serial/PauseAgain (0.85s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-937923 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

                                                
                                    
TestPause/serial/DeletePaused (1.03s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-937923 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-937923 --alsologtostderr -v=5: (1.025212496s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (89.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m29.848375865s)
--- PASS: TestNetworkPlugins/group/calico/Start (89.85s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (96.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m36.532810169s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.53s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5b7w5" [936d6397-e8ea-4fa9-b8d1-9f385312cac0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003671698s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-697641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-697641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qm9ph" [7270a5c2-2226-4b03-9a34-9e4e4b3dceb0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qm9ph" [7270a5c2-2226-4b03-9a34-9e4e4b3dceb0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004745521s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-697641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m30.647818766s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.65s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ngw54" [8063c27a-71c9-42ad-a6f7-4577f2af3676] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004658405s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-697641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-697641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pc8pp" [813519a7-c8d6-46c7-97ac-52f04f4e55cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pc8pp" [813519a7-c8d6-46c7-97ac-52f04f4e55cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004483146s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (82.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m22.826874685s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.83s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-697641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-697641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-697641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ndtvk" [fc08fa8a-fed3-40ac-9534-61bf4844c75f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ndtvk" [fc08fa8a-fed3-40ac-9534-61bf4844c75f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004991601s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-697641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (95.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0816 00:22:51.160218   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-697641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m35.652449399s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.65s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-697641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-697641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hz6lk" [37f398c7-29d7-4023-a2b2-f7edb6a12fbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hz6lk" [37f398c7-29d7-4023-a2b2-f7edb6a12fbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004616754s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gdz5p" [7411e5ca-81c9-4bdd-abe9-0b83a89e5eb4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005577358s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-697641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-697641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-697641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f7nfq" [6a874f70-ab2d-44c5-98d9-c3a520f4ce73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f7nfq" [6a874f70-ab2d-44c5-98d9-c3a520f4ce73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004839904s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-697641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (102.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-819398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 00:24:14.234322   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/addons-517040/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-819398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m42.639405987s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (102.64s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (101.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-758469 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-758469 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m41.31940219s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (101.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-697641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-697641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fcz6b" [25df1147-9e1d-4178-b618-8921193a1a85] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fcz6b" [25df1147-9e1d-4178-b618-8921193a1a85] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004837724s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-697641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-697641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-616827 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 00:24:53.799660   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/functional-629421/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:25.212656   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:25.219044   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:25.230509   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:25.251913   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:25.293305   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:25.375117   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:25.537075   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:25.858624   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:26.500091   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:27.782129   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:30.344183   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:35.465782   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:25:45.708010   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-616827 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m9.538795498s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-819398 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af2d3601-0b7c-4683-a499-e5039d17d76d] Pending
helpers_test.go:344: "busybox" [af2d3601-0b7c-4683-a499-e5039d17d76d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af2d3601-0b7c-4683-a499-e5039d17d76d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004742436s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-819398 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-819398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-819398 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-616827 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [44031c7f-e317-4703-aab3-50572aae00c2] Pending
helpers_test.go:344: "busybox" [44031c7f-e317-4703-aab3-50572aae00c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [44031c7f-e317-4703-aab3-50572aae00c2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00529624s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-616827 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-758469 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1eb1c3b9-67a8-462a-a1f7-df1af9e610cc] Pending
helpers_test.go:344: "busybox" [1eb1c3b9-67a8-462a-a1f7-df1af9e610cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1eb1c3b9-67a8-462a-a1f7-df1af9e610cc] Running
E0816 00:26:06.189482   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/auto-697641/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004115774s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-758469 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-758469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-758469 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-616827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-616827 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (684.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-819398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-819398 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m24.411567871s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-819398 -n no-preload-819398
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (684.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (567.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-758469 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-758469 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m27.656983168s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-758469 -n embed-certs-758469
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (567.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (583.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-616827 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 00:28:39.934749   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:42.496815   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:43.006723   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:43.013105   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:43.024455   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:43.045903   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:43.087277   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:43.168739   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:43.330287   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:43.652002   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:44.293466   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:45.575428   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:47.618110   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:48.137039   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:50.409132   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/custom-flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:53.258919   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:28:57.859768   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:03.500427   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:15.368805   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/kindnet-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:18.341906   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:21.519983   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:21.526305   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:21.537653   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:21.558988   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:21.600375   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:21.681817   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:21.843451   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:22.165209   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:22.807374   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:23.982403   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:24.089215   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:26.651489   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:31.772993   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:29:42.014932   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/bridge-697641/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-616827 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m43.507732512s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616827 -n default-k8s-diff-port-616827
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (583.77s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-098619 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-098619 --alsologtostderr -v=3: (4.28491109s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-098619 -n old-k8s-version-098619: exit status 7 (63.272156ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-098619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (50.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-504758 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0816 00:53:37.364615   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/enable-default-cni-697641/client.crt: no such file or directory" logger="UnhandledError"
E0816 00:53:43.006913   20078 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19452-12919/.minikube/profiles/flannel-697641/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-504758 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (50.640332959s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-504758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-504758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05289265s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-504758 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-504758 --alsologtostderr -v=3: (7.328789682s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-504758 -n newest-cni-504758
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-504758 -n newest-cni-504758: exit status 7 (62.77574ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-504758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-504758 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-504758 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (37.021871573s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-504758 -n newest-cni-504758
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-504758 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-504758 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-504758 -n newest-cni-504758
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-504758 -n newest-cni-504758: exit status 2 (239.937266ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-504758 -n newest-cni-504758
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-504758 -n newest-cni-504758: exit status 2 (235.516691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-504758 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-504758 -n newest-cni-504758
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-504758 -n newest-cni-504758
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.30s)

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 2.76
263 TestNetworkPlugins/group/cilium 3.46
277 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-697641 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-697641" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-697641

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-697641"

                                                
                                                
----------------------- debugLogs end: kubenet-697641 [took: 2.623788734s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-697641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-697641
--- SKIP: TestNetworkPlugins/group/kubenet (2.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-697641 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-697641" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-697641

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: cri-dockerd version:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: containerd daemon status:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: containerd daemon config:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: containerd config dump:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: crio daemon status:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: crio daemon config:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: /etc/crio:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
>>> host: crio config:
* Profile "cilium-697641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-697641"
----------------------- debugLogs end: cilium-697641 [took: 3.32116423s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-697641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-697641
--- SKIP: TestNetworkPlugins/group/cilium (3.46s)
x
+
TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-067133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-067133
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)